Jan 29 11:11:49.919170 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 29 11:11:49.919192 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:11:49.919203 kernel: BIOS-provided physical RAM map: Jan 29 11:11:49.919210 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 11:11:49.919216 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 11:11:49.919223 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 11:11:49.919230 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 29 11:11:49.919237 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 29 11:11:49.919244 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 11:11:49.919253 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 29 11:11:49.919259 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 11:11:49.919266 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 11:11:49.919277 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 11:11:49.919284 kernel: NX (Execute Disable) protection: active Jan 29 11:11:49.919291 kernel: APIC: Static calls initialized Jan 29 11:11:49.919304 kernel: SMBIOS 2.8 present. 
Jan 29 11:11:49.919311 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 29 11:11:49.919318 kernel: Hypervisor detected: KVM Jan 29 11:11:49.919326 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:11:49.919333 kernel: kvm-clock: using sched offset of 3242579877 cycles Jan 29 11:11:49.919341 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:11:49.919348 kernel: tsc: Detected 2794.750 MHz processor Jan 29 11:11:49.919356 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:11:49.919364 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:11:49.919371 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 29 11:11:49.919381 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 11:11:49.919389 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:11:49.919396 kernel: Using GB pages for direct mapping Jan 29 11:11:49.919404 kernel: ACPI: Early table checksum verification disabled Jan 29 11:11:49.919411 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 29 11:11:49.919418 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:49.919426 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:49.919433 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:49.919443 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 29 11:11:49.919450 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:49.919458 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:49.919465 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:49.919472 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:11:49.919480 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 29 11:11:49.919487 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 29 11:11:49.919498 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 29 11:11:49.919508 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 29 11:11:49.919515 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 29 11:11:49.919523 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 29 11:11:49.919530 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 29 11:11:49.919541 kernel: No NUMA configuration found Jan 29 11:11:49.919548 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 29 11:11:49.919556 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 29 11:11:49.919575 kernel: Zone ranges: Jan 29 11:11:49.919583 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:11:49.919590 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 29 11:11:49.919598 kernel: Normal empty Jan 29 11:11:49.919606 kernel: Movable zone start for each node Jan 29 11:11:49.919613 kernel: Early memory node ranges Jan 29 11:11:49.919621 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 11:11:49.919628 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 29 11:11:49.919636 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 29 11:11:49.919646 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:11:49.919655 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 11:11:49.919663 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 29 11:11:49.919671 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 11:11:49.919678 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:11:49.919686 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:11:49.919693 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:11:49.919701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:11:49.919708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:11:49.919718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:11:49.919726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:11:49.919733 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:11:49.919741 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:11:49.919748 kernel: TSC deadline timer available Jan 29 11:11:49.919756 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 11:11:49.919764 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:11:49.919771 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 11:11:49.919781 kernel: kvm-guest: setup PV sched yield Jan 29 11:11:49.919788 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 29 11:11:49.919798 kernel: Booting paravirtualized kernel on KVM Jan 29 11:11:49.919806 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:11:49.919814 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 11:11:49.919821 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 11:11:49.919856 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 11:11:49.919864 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 11:11:49.919871 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:11:49.919879 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:11:49.919887 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:11:49.919899 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:11:49.919907 kernel: random: crng init done Jan 29 11:11:49.919914 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:11:49.919922 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:11:49.919930 kernel: Fallback order for Node 0: 0 Jan 29 11:11:49.919937 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 29 11:11:49.919945 kernel: Policy zone: DMA32 Jan 29 11:11:49.919952 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:11:49.919963 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 138948K reserved, 0K cma-reserved) Jan 29 11:11:49.919970 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 11:11:49.919978 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 11:11:49.919985 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:11:49.919993 kernel: Dynamic Preempt: voluntary Jan 29 11:11:49.920001 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:11:49.920012 kernel: rcu: RCU event tracing is enabled. Jan 29 11:11:49.920020 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 11:11:49.920028 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:11:49.920038 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:11:49.920046 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:11:49.920053 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:11:49.920064 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 11:11:49.920071 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 11:11:49.920079 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 11:11:49.920086 kernel: Console: colour VGA+ 80x25 Jan 29 11:11:49.920094 kernel: printk: console [ttyS0] enabled Jan 29 11:11:49.920102 kernel: ACPI: Core revision 20230628 Jan 29 11:11:49.920112 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 11:11:49.920119 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:11:49.920127 kernel: x2apic enabled Jan 29 11:11:49.920135 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:11:49.920142 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 11:11:49.920150 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 11:11:49.920158 kernel: kvm-guest: setup PV IPIs Jan 29 11:11:49.920175 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:11:49.920183 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 11:11:49.920191 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 29 11:11:49.920199 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 11:11:49.920207 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 11:11:49.920217 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 11:11:49.920225 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:11:49.920233 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:11:49.920241 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:11:49.920252 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:11:49.920259 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 11:11:49.920270 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 11:11:49.920278 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:11:49.920286 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:11:49.920294 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 11:11:49.920302 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 11:11:49.920310 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 11:11:49.920318 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:11:49.920329 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:11:49.920336 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:11:49.920344 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:11:49.920352 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 11:11:49.920360 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:11:49.920368 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:11:49.920376 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:11:49.920384 kernel: landlock: Up and running. Jan 29 11:11:49.920391 kernel: SELinux: Initializing. Jan 29 11:11:49.920402 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:11:49.920410 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:11:49.920418 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 11:11:49.920426 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:11:49.920434 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:11:49.920442 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:11:49.920450 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 11:11:49.920460 kernel: ... version: 0 Jan 29 11:11:49.920470 kernel: ... bit width: 48 Jan 29 11:11:49.920478 kernel: ... generic registers: 6 Jan 29 11:11:49.920486 kernel: ... value mask: 0000ffffffffffff Jan 29 11:11:49.920494 kernel: ... max period: 00007fffffffffff Jan 29 11:11:49.920501 kernel: ... fixed-purpose events: 0 Jan 29 11:11:49.920509 kernel: ... 
event mask: 000000000000003f Jan 29 11:11:49.920517 kernel: signal: max sigframe size: 1776 Jan 29 11:11:49.920525 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:11:49.920533 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:11:49.920543 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:11:49.920551 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:11:49.920566 kernel: .... node #0, CPUs: #1 #2 #3 Jan 29 11:11:49.920574 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 11:11:49.920581 kernel: smpboot: Max logical packages: 1 Jan 29 11:11:49.920589 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 29 11:11:49.920597 kernel: devtmpfs: initialized Jan 29 11:11:49.920605 kernel: x86/mm: Memory block size: 128MB Jan 29 11:11:49.920613 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:11:49.920621 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 11:11:49.920631 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:11:49.920639 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:11:49.920647 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:11:49.920655 kernel: audit: type=2000 audit(1738149109.241:1): state=initialized audit_enabled=0 res=1 Jan 29 11:11:49.920663 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:11:49.920671 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:11:49.920678 kernel: cpuidle: using governor menu Jan 29 11:11:49.920686 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:11:49.920694 kernel: dca service started, version 1.12.1 Jan 29 11:11:49.920705 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 11:11:49.920712 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 11:11:49.920720 kernel: PCI: Using configuration type 1 for base access Jan 29 11:11:49.920728 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:11:49.920736 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:11:49.920744 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:11:49.920752 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:11:49.920760 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:11:49.920770 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:11:49.920778 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:11:49.920786 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:11:49.920794 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:11:49.920801 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:11:49.920809 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:11:49.920817 kernel: ACPI: Interpreter enabled Jan 29 11:11:49.920836 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 11:11:49.920844 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:11:49.920852 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:11:49.920863 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:11:49.920871 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 11:11:49.920879 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:11:49.921084 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:11:49.921223 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 11:11:49.921353 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 11:11:49.921365 kernel: PCI host bridge to bus 0000:00 Jan 29 11:11:49.921513 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:11:49.921644 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:11:49.921764 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:11:49.921910 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 11:11:49.922031 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:11:49.922149 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 29 11:11:49.922265 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:11:49.922433 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 11:11:49.922598 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 11:11:49.922729 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 29 11:11:49.922901 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 29 11:11:49.923033 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 29 11:11:49.923159 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:11:49.923315 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 11:11:49.923446 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 29 11:11:49.923590 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 29 11:11:49.923721 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 29 11:11:49.923889 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 11:11:49.924024 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 11:11:49.924152 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 29 
11:11:49.924286 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 29 11:11:49.924434 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:11:49.924575 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 29 11:11:49.924705 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 29 11:11:49.924846 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 29 11:11:49.924980 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 29 11:11:49.925124 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 11:11:49.925259 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 11:11:49.925412 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 11:11:49.925545 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 29 11:11:49.925684 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 29 11:11:49.925845 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 11:11:49.925979 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 29 11:11:49.925995 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:11:49.926004 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:11:49.926012 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:11:49.926020 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:11:49.926028 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 11:11:49.926035 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 11:11:49.926043 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 11:11:49.926051 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 11:11:49.926059 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 11:11:49.926069 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 11:11:49.926077 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 11:11:49.926085 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 11:11:49.926093 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 11:11:49.926101 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 11:11:49.926108 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 11:11:49.926116 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 11:11:49.926124 kernel: iommu: Default domain type: Translated Jan 29 11:11:49.926132 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:11:49.926142 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:11:49.926150 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:11:49.926157 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 11:11:49.926165 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 29 11:11:49.926292 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 11:11:49.926421 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 11:11:49.926548 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:11:49.926567 kernel: vgaarb: loaded Jan 29 11:11:49.926576 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 11:11:49.926587 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 11:11:49.926595 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:11:49.926603 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 
11:11:49.926612 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:11:49.926619 kernel: pnp: PnP ACPI init Jan 29 11:11:49.926780 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 11:11:49.926793 kernel: pnp: PnP ACPI: found 6 devices Jan 29 11:11:49.926801 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:11:49.926812 kernel: NET: Registered PF_INET protocol family Jan 29 11:11:49.926820 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:11:49.926851 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 11:11:49.926859 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:11:49.926867 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:11:49.926875 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 11:11:49.926882 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 11:11:49.926890 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:11:49.926898 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:11:49.926909 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:11:49.926917 kernel: NET: Registered PF_XDP protocol family Jan 29 11:11:49.927040 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:11:49.927157 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:11:49.927275 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:11:49.927397 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 11:11:49.927515 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 11:11:49.927658 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 29 11:11:49.927678 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:11:49.927688 kernel: Initialise system trusted keyrings Jan 29 11:11:49.927698 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 11:11:49.927706 kernel: Key type asymmetric registered Jan 29 11:11:49.927714 kernel: Asymmetric key parser 'x509' registered Jan 29 11:11:49.927722 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:11:49.927730 kernel: io scheduler mq-deadline registered Jan 29 11:11:49.927738 kernel: io scheduler kyber registered Jan 29 11:11:49.927746 kernel: io scheduler bfq registered Jan 29 11:11:49.927756 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:11:49.927765 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 11:11:49.927773 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 11:11:49.927781 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 11:11:49.927788 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:11:49.927796 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:11:49.927804 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:11:49.927812 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:11:49.927820 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:11:49.927979 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 11:11:49.927992 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:11:49.928112 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 29 11:11:49.928233 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:11:49 UTC (1738149109) Jan 29 11:11:49.928353 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 11:11:49.928364 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 11:11:49.928372 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:11:49.928380 kernel: Segment Routing with IPv6 Jan 29 11:11:49.928392 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:11:49.928400 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:11:49.928408 kernel: Key type dns_resolver registered Jan 29 11:11:49.928415 kernel: IPI shorthand broadcast: enabled Jan 29 11:11:49.928423 kernel: sched_clock: Marking stable (714002511, 106658174)->(848702137, -28041452) Jan 29 11:11:49.928431 kernel: registered taskstats version 1 Jan 29 11:11:49.928439 kernel: Loading compiled-in X.509 certificates Jan 29 11:11:49.928447 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 29 11:11:49.928455 kernel: Key type .fscrypt registered Jan 29 11:11:49.928465 kernel: Key type fscrypt-provisioning registered Jan 29 11:11:49.928473 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:11:49.928481 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:11:49.928489 kernel: ima: No architecture policies found Jan 29 11:11:49.928496 kernel: clk: Disabling unused clocks Jan 29 11:11:49.928504 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 29 11:11:49.928512 kernel: Write protecting the kernel read-only data: 38912k Jan 29 11:11:49.928520 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 29 11:11:49.928528 kernel: Run /init as init process Jan 29 11:11:49.928538 kernel: with arguments: Jan 29 11:11:49.928546 kernel: /init Jan 29 11:11:49.928553 kernel: with environment: Jan 29 11:11:49.928571 kernel: HOME=/ Jan 29 11:11:49.928579 kernel: TERM=linux Jan 29 11:11:49.928586 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:11:49.928596 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:11:49.928607 systemd[1]: Detected virtualization kvm. Jan 29 11:11:49.928618 systemd[1]: Detected architecture x86-64. Jan 29 11:11:49.928626 systemd[1]: Running in initrd. Jan 29 11:11:49.928635 systemd[1]: No hostname configured, using default hostname. Jan 29 11:11:49.928643 systemd[1]: Hostname set to . Jan 29 11:11:49.928652 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:11:49.928660 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:11:49.928669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:11:49.928678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:11:49.928690 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:11:49.928710 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 11:11:49.928721 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:11:49.928730 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:11:49.928741 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:11:49.928754 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:11:49.928763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:11:49.928772 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:11:49.928781 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:11:49.928789 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:11:49.928798 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:11:49.928807 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:11:49.928815 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:11:49.928863 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:11:49.928872 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:11:49.928881 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:11:49.928890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:11:49.928898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:11:49.928907 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:11:49.928916 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:11:49.928924 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:11:49.928936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:11:49.928945 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:11:49.928953 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:11:49.928962 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:11:49.928971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:11:49.928979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:49.928988 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:11:49.928997 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:11:49.929006 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:11:49.929036 systemd-journald[193]: Collecting audit messages is disabled. Jan 29 11:11:49.929060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:11:49.929071 systemd-journald[193]: Journal started Jan 29 11:11:49.929092 systemd-journald[193]: Runtime Journal (/run/log/journal/147afdd7a33f4bd3a90f5ce6da375d9c) is 6.0M, max 48.3M, 42.3M free. Jan 29 11:11:49.915008 systemd-modules-load[195]: Inserted module 'overlay' Jan 29 11:11:49.954180 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:11:49.954198 kernel: Bridge firewalling registered Jan 29 11:11:49.954209 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 29 11:11:49.941598 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 29 11:11:49.954515 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:11:49.956910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:49.959245 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:11:49.978059 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:11:49.980681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:11:49.981413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:11:49.982813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:11:49.993036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:11:49.997192 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:11:50.002966 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:11:50.003215 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:11:50.009414 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:11:50.013498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:11:50.021712 dracut-cmdline[223]: dracut-dracut-053 Jan 29 11:11:50.024774 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 29 11:11:50.052032 systemd-resolved[232]: Positive Trust Anchors: Jan 29 11:11:50.052048 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:11:50.052086 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:11:50.055031 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 29 11:11:50.056411 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:11:50.063097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:11:50.110863 kernel: SCSI subsystem initialized Jan 29 11:11:50.119855 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:11:50.131863 kernel: iscsi: registered transport (tcp) Jan 29 11:11:50.151879 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:11:50.151905 kernel: QLogic iSCSI HBA Driver Jan 29 11:11:50.207345 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:11:50.218998 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 29 11:11:50.245497 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:11:50.245560 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:11:50.245575 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:11:50.287899 kernel: raid6: avx2x4 gen() 30698 MB/s Jan 29 11:11:50.304889 kernel: raid6: avx2x2 gen() 31250 MB/s Jan 29 11:11:50.321947 kernel: raid6: avx2x1 gen() 26077 MB/s Jan 29 11:11:50.322032 kernel: raid6: using algorithm avx2x2 gen() 31250 MB/s Jan 29 11:11:50.339959 kernel: raid6: .... xor() 19995 MB/s, rmw enabled Jan 29 11:11:50.340038 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:11:50.360880 kernel: xor: automatically using best checksumming function avx Jan 29 11:11:50.509889 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:11:50.525276 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:11:50.536080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:11:50.549884 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 29 11:11:50.554839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:11:50.563383 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:11:50.583172 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Jan 29 11:11:50.625056 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:11:50.631962 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:11:50.709716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:11:50.724004 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:11:50.739065 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:11:50.743029 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:11:50.744915 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:11:50.761217 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:11:50.761379 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:11:50.761393 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:11:50.761404 kernel: GPT:9289727 != 19775487 Jan 29 11:11:50.761414 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:11:50.761424 kernel: GPT:9289727 != 19775487 Jan 29 11:11:50.761434 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:11:50.761444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:11:50.748796 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:11:50.750124 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:11:50.771184 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:11:50.771248 kernel: AES CTR mode by8 optimization enabled Jan 29 11:11:50.767101 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:11:50.785107 kernel: libata version 3.00 loaded. Jan 29 11:11:50.786217 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:11:50.789984 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 29 11:11:50.790135 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:11:50.793721 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:11:50.798263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:11:50.798448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:50.800194 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:50.809945 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (457) Jan 29 11:11:50.813220 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474) Jan 29 11:11:50.813119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:50.817088 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:11:50.844501 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:11:50.844525 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:11:50.845528 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:11:50.845767 kernel: scsi host0: ahci Jan 29 11:11:50.846021 kernel: scsi host1: ahci Jan 29 11:11:50.846237 kernel: scsi host2: ahci Jan 29 11:11:50.846441 kernel: scsi host3: ahci Jan 29 11:11:50.846651 kernel: scsi host4: ahci Jan 29 11:11:50.846880 kernel: scsi host5: ahci Jan 29 11:11:50.847090 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 11:11:50.847108 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 11:11:50.847123 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 11:11:50.847138 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 11:11:50.847158 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 11:11:50.847173 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 11:11:50.831957 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:11:50.849809 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:11:50.881797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:50.887974 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:11:50.890575 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:11:50.903118 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:11:50.917964 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:11:50.921110 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:11:50.946848 disk-uuid[568]: Primary Header is updated. Jan 29 11:11:50.946848 disk-uuid[568]: Secondary Entries is updated. Jan 29 11:11:50.946848 disk-uuid[568]: Secondary Header is updated. Jan 29 11:11:50.950452 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:11:50.951849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:11:50.955844 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:11:51.151887 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:11:51.151989 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:11:51.152848 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:11:51.152876 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:11:51.154011 kernel: ata3.00: applying bridge limits Jan 29 11:11:51.154866 kernel: ata3.00: configured for UDMA/100 Jan 29 11:11:51.156860 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:11:51.159864 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:11:51.159902 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:11:51.160861 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:11:51.205886 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:11:51.220148 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:11:51.220165 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:11:51.957612 disk-uuid[577]: The operation has completed successfully. Jan 29 11:11:51.959138 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:11:51.991260 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:11:51.991390 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:11:52.015051 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:11:52.018318 sh[594]: Success Jan 29 11:11:52.030856 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:11:52.069062 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:11:52.087595 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:11:52.090709 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:11:52.125009 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 29 11:11:52.125073 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:11:52.125085 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:11:52.126151 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:11:52.126986 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:11:52.132823 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:11:52.133662 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:11:52.150022 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:11:52.151983 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:11:52.161851 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:11:52.161890 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:11:52.161906 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:11:52.164864 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:11:52.174348 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 29 11:11:52.176607 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:11:52.185739 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:11:52.194097 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:11:52.301652 ignition[690]: Ignition 2.20.0 Jan 29 11:11:52.301665 ignition[690]: Stage: fetch-offline Jan 29 11:11:52.301715 ignition[690]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:52.301729 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:11:52.301881 ignition[690]: parsed url from cmdline: "" Jan 29 11:11:52.301888 ignition[690]: no config URL provided Jan 29 11:11:52.301898 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:11:52.301912 ignition[690]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:11:52.301952 ignition[690]: op(1): [started] loading QEMU firmware config module Jan 29 11:11:52.301959 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:11:52.311473 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:11:52.341402 ignition[690]: op(1): [finished] loading QEMU firmware config module Jan 29 11:11:52.343150 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:11:52.365996 systemd-networkd[782]: lo: Link UP Jan 29 11:11:52.366006 systemd-networkd[782]: lo: Gained carrier Jan 29 11:11:52.367800 systemd-networkd[782]: Enumeration completed Jan 29 11:11:52.368015 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:11:52.368217 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:11:52.368221 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:11:52.370889 systemd-networkd[782]: eth0: Link UP Jan 29 11:11:52.370893 systemd-networkd[782]: eth0: Gained carrier Jan 29 11:11:52.370900 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:11:52.375562 systemd[1]: Reached target network.target - Network. Jan 29 11:11:52.390888 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:11:52.393900 ignition[690]: parsing config with SHA512: 9bff100f6590796172defa9af8002c399e3c241a440a684b3e0d570b49e7dc97ca835264052f7bb4978080b707d67b2857ec092eeec29cead1004a7ee51cf82f Jan 29 11:11:52.399042 unknown[690]: fetched base config from "system" Jan 29 11:11:52.399057 unknown[690]: fetched user config from "qemu" Jan 29 11:11:52.399623 ignition[690]: fetch-offline: fetch-offline passed Jan 29 11:11:52.399707 ignition[690]: Ignition finished successfully Jan 29 11:11:52.402246 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:11:52.403684 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:11:52.417985 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 11:11:52.435547 ignition[786]: Ignition 2.20.0 Jan 29 11:11:52.435558 ignition[786]: Stage: kargs Jan 29 11:11:52.435726 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:52.435739 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:11:52.436608 ignition[786]: kargs: kargs passed Jan 29 11:11:52.436654 ignition[786]: Ignition finished successfully Jan 29 11:11:52.443706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:11:52.457016 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:11:52.540733 ignition[794]: Ignition 2.20.0 Jan 29 11:11:52.540746 ignition[794]: Stage: disks Jan 29 11:11:52.540963 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:52.540978 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:11:52.544235 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:11:52.541873 ignition[794]: disks: disks passed Jan 29 11:11:52.545742 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:11:52.541923 ignition[794]: Ignition finished successfully Jan 29 11:11:52.547727 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:11:52.549784 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:11:52.552073 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:11:52.553298 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:11:52.566150 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:11:52.579677 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:11:52.586153 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:11:52.597974 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:11:52.686864 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 29 11:11:52.687547 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:11:52.689101 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:11:52.708980 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:11:52.712201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:11:52.713880 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:11:52.721391 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) Jan 29 11:11:52.721418 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:11:52.721432 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:11:52.721447 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:11:52.713925 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:11:52.724466 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:11:52.713951 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:11:52.726269 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:11:52.747680 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 29 11:11:52.748814 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:11:52.791942 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:11:52.797491 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:11:52.802541 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:11:52.807782 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:11:52.974337 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:11:52.995100 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:11:52.997114 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:11:53.006910 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:11:53.028728 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:11:53.070574 ignition[930]: INFO : Ignition 2.20.0 Jan 29 11:11:53.070574 ignition[930]: INFO : Stage: mount Jan 29 11:11:53.072705 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:53.072705 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:11:53.072705 ignition[930]: INFO : mount: mount passed Jan 29 11:11:53.072705 ignition[930]: INFO : Ignition finished successfully Jan 29 11:11:53.074245 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:11:53.084963 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:11:53.124246 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:11:53.137027 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:11:53.148875 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941) Jan 29 11:11:53.148993 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 29 11:11:53.149011 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:11:53.150881 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:11:53.166899 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:11:53.169463 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:11:53.206605 ignition[958]: INFO : Ignition 2.20.0 Jan 29 11:11:53.209511 ignition[958]: INFO : Stage: files Jan 29 11:11:53.209511 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:53.209511 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:11:53.209511 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:11:53.215887 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:11:53.215887 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:11:53.225228 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:11:53.227124 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:11:53.230929 unknown[958]: wrote ssh authorized keys file for user: core Jan 29 11:11:53.233002 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:11:53.237521 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:11:53.237521 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:11:53.304251 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:11:53.390757 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:11:53.390757 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:11:53.395470 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:11:53.734553 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:11:53.831120 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:11:53.831120 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:11:53.835146 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:11:53.835146 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:11:54.306104 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:11:54.312998 systemd-networkd[782]: eth0: Gained IPv6LL Jan 29 11:11:54.651026 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:11:54.651026 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:11:54.655657 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:11:54.655657 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:11:54.655657 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:11:54.655657 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:11:54.655657 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:11:54.655657 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:11:54.655657 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:11:54.655657 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:11:54.678366 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:11:54.683079 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:11:54.685205 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:11:54.685205 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:11:54.688549 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:11:54.690356 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:11:54.692506 
ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:11:54.694491 ignition[958]: INFO : files: files passed Jan 29 11:11:54.695411 ignition[958]: INFO : Ignition finished successfully Jan 29 11:11:54.699534 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:11:54.707977 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:11:54.710785 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:11:54.712706 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:11:54.712818 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:11:54.739230 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:11:54.743662 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:11:54.743662 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:11:54.749039 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:11:54.746138 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:11:54.749231 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:11:54.763959 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:11:54.788545 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:11:54.788688 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:11:54.791082 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:11:54.793240 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:11:54.795258 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:11:54.796120 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:11:54.814498 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:11:54.827061 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:11:54.837988 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:11:54.839313 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:11:54.841566 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:11:54.843605 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:11:54.843749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:11:54.846059 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:11:54.847645 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:11:54.849704 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:11:54.851781 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:11:54.853823 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:11:54.856070 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 29 11:11:54.858217 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:11:54.860523 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:11:54.862459 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:11:54.864628 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:11:54.866428 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:11:54.866576 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:11:54.868935 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:11:54.870366 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:11:54.872458 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:11:54.872575 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:11:54.874699 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:11:54.874810 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:11:54.877199 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:11:54.877312 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:11:54.879176 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:11:54.880918 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:11:54.884923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:11:54.886582 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:11:54.888501 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:11:54.890550 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:11:54.890651 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:11:54.892378 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:11:54.892480 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:11:54.894514 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:11:54.894633 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:11:54.897148 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:11:54.897255 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:11:54.907979 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:11:54.908926 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:11:54.909044 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:11:54.911866 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:11:54.912982 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:11:54.913143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:11:54.915580 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 29 11:11:54.921523 ignition[1013]: INFO : Ignition 2.20.0 Jan 29 11:11:54.921523 ignition[1013]: INFO : Stage: umount Jan 29 11:11:54.921523 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:11:54.921523 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:11:54.915748 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:11:54.935415 ignition[1013]: INFO : umount: umount passed Jan 29 11:11:54.935415 ignition[1013]: INFO : Ignition finished successfully Jan 29 11:11:54.922081 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:11:54.922195 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:11:54.924010 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:11:54.924113 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:11:54.927684 systemd[1]: Stopped target network.target - Network. Jan 29 11:11:54.927767 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:11:54.927821 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:11:54.928566 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:11:54.928615 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:11:54.929047 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:11:54.929092 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:11:54.929371 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:11:54.929415 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:11:54.929913 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:11:54.930346 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:11:54.941008 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:11:54.944902 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 29 11:11:54.947410 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:11:54.947584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:11:54.950717 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:11:54.950933 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:11:54.953639 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:11:54.953737 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:11:54.960914 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:11:54.961993 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:11:54.962055 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:11:54.964672 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:11:54.964750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:11:54.966776 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:11:54.966846 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:11:54.969017 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:11:54.969072 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 29 11:11:54.971479 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:11:54.986085 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:11:54.987066 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:11:54.991701 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:11:54.991961 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:11:54.994297 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:11:54.994352 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:11:54.996441 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:11:54.996495 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:11:54.998640 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:11:54.998708 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:11:55.001136 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:11:55.001204 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:11:55.002773 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:11:55.002898 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:11:55.015036 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:11:55.015097 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:11:55.015156 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:11:55.015495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:11:55.015541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:11:55.024771 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:11:55.024921 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:11:55.280790 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:11:55.280955 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:11:55.283193 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:11:55.284983 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:11:55.285039 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:11:55.303118 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:11:55.310452 systemd[1]: Switching root. Jan 29 11:11:55.341891 systemd-journald[193]: Journal stopped Jan 29 11:11:57.048791 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jan 29 11:11:57.049947 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:11:57.049970 kernel: SELinux: policy capability open_perms=1 Jan 29 11:11:57.049983 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:11:57.049998 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:11:57.050010 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:11:57.050026 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:11:57.050037 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:11:57.050053 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:11:57.050065 kernel: audit: type=1403 audit(1738149116.154:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:11:57.050078 systemd[1]: Successfully loaded SELinux policy in 43.632ms. Jan 29 11:11:57.050103 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.737ms. Jan 29 11:11:57.050118 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:11:57.050131 systemd[1]: Detected virtualization kvm. Jan 29 11:11:57.050144 systemd[1]: Detected architecture x86-64. Jan 29 11:11:57.050156 systemd[1]: Detected first boot. Jan 29 11:11:57.050168 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:11:57.050180 zram_generator::config[1058]: No configuration found. Jan 29 11:11:57.050194 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:11:57.050207 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:11:57.050219 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:11:57.050236 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:11:57.050250 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:11:57.050262 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:11:57.050275 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:11:57.050288 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:11:57.050301 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:11:57.050314 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:11:57.050327 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:11:57.050342 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:11:57.050354 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:11:57.050367 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:11:57.050379 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:11:57.050392 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:11:57.050413 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 29 11:11:57.050426 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:11:57.050438 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:11:57.050451 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:11:57.050466 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:11:57.050479 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:11:57.050492 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:11:57.050504 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:11:57.050516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:11:57.050529 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:11:57.050541 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:11:57.050554 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:11:57.050573 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:11:57.050586 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:11:57.050599 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:11:57.050612 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:11:57.050624 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:11:57.050636 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:11:57.050649 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:11:57.050662 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:11:57.050674 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:11:57.050689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:57.050701 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:11:57.050713 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:11:57.050726 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:11:57.050739 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:11:57.050751 systemd[1]: Reached target machines.target - Containers. Jan 29 11:11:57.050763 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:11:57.050776 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:11:57.050791 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:11:57.050803 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:11:57.050816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:11:57.050841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:11:57.050855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:11:57.050867 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 29 11:11:57.050880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:11:57.050893 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:11:57.050906 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:11:57.050921 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:11:57.050933 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:11:57.050945 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:11:57.050957 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:11:57.050970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:11:57.050982 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:11:57.050995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:11:57.051006 kernel: fuse: init (API version 7.39) Jan 29 11:11:57.051018 kernel: loop: module loaded Jan 29 11:11:57.051033 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:11:57.051045 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:11:57.051057 systemd[1]: Stopped verity-setup.service. Jan 29 11:11:57.051092 systemd-journald[1121]: Collecting audit messages is disabled. Jan 29 11:11:57.051116 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:57.051129 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:11:57.051141 systemd-journald[1121]: Journal started Jan 29 11:11:57.051168 systemd-journald[1121]: Runtime Journal (/run/log/journal/147afdd7a33f4bd3a90f5ce6da375d9c) is 6.0M, max 48.3M, 42.3M free. Jan 29 11:11:56.759552 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:11:56.786485 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:11:56.787015 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:11:57.053586 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:11:57.054498 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:11:57.056343 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:11:57.057929 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:11:57.059285 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:11:57.060612 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:11:57.062039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:11:57.063774 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:11:57.064026 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:11:57.065718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:11:57.065968 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:11:57.067489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:11:57.067695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 29 11:11:57.069307 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:11:57.069511 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:11:57.071022 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:11:57.071226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:11:57.072685 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:11:57.074177 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:11:57.093554 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:11:57.094859 kernel: ACPI: bus type drm_connector registered Jan 29 11:11:57.097006 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:11:57.097184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:11:57.115276 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:11:57.128962 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:11:57.132276 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:11:57.133505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:11:57.133551 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:11:57.136047 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:11:57.139438 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:11:57.142842 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:11:57.144233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:57.264073 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:11:57.267524 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:11:57.268782 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:11:57.274647 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:11:57.279579 systemd-journald[1121]: Time spent on flushing to /var/log/journal/147afdd7a33f4bd3a90f5ce6da375d9c is 24.087ms for 949 entries. Jan 29 11:11:57.279579 systemd-journald[1121]: System Journal (/var/log/journal/147afdd7a33f4bd3a90f5ce6da375d9c) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:11:57.504661 systemd-journald[1121]: Received client request to flush runtime journal. Jan 29 11:11:57.477400 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:11:57.482046 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:11:57.489101 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:11:57.494569 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:11:57.496515 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:11:57.498404 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 29 11:11:57.500139 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:11:57.504626 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:11:57.506861 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:11:57.508763 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:11:57.518767 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:11:57.530031 kernel: loop0: detected capacity change from 0 to 138184 Jan 29 11:11:57.532062 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:11:57.536672 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:11:57.540003 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:11:57.543282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:11:57.561224 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:11:57.610866 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:11:57.620980 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:11:57.634422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:11:57.638858 kernel: loop1: detected capacity change from 0 to 205544 Jan 29 11:11:57.770508 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 29 11:11:57.770533 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 29 11:11:57.780821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:11:57.785139 kernel: loop2: detected capacity change from 0 to 141000 Jan 29 11:11:57.841103 kernel: loop3: detected capacity change from 0 to 138184 Jan 29 11:11:57.860760 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:11:57.862898 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:11:58.058865 kernel: loop4: detected capacity change from 0 to 205544 Jan 29 11:11:58.069876 kernel: loop5: detected capacity change from 0 to 141000 Jan 29 11:11:58.089421 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:11:58.090119 (sd-merge)[1195]: Merged extensions into '/usr'. Jan 29 11:11:58.095606 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:11:58.095623 systemd[1]: Reloading... Jan 29 11:11:58.162166 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:11:58.189856 zram_generator::config[1221]: No configuration found. Jan 29 11:11:58.329976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:11:58.396606 systemd[1]: Reloading finished in 300 ms. Jan 29 11:11:58.433407 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:11:58.435134 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:11:58.452060 systemd[1]: Starting ensure-sysext.service... 
Jan 29 11:11:58.455081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:11:58.464957 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:11:58.464986 systemd[1]: Reloading... Jan 29 11:11:58.493001 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:11:58.493299 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:11:58.494382 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:11:58.494742 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 29 11:11:58.494893 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 29 11:11:58.499606 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:11:58.499767 systemd-tmpfiles[1260]: Skipping /boot Jan 29 11:11:58.517674 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:11:58.517871 systemd-tmpfiles[1260]: Skipping /boot Jan 29 11:11:58.527029 zram_generator::config[1287]: No configuration found. Jan 29 11:11:58.658713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:11:58.710692 systemd[1]: Reloading finished in 245 ms. Jan 29 11:11:58.732031 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:11:58.745552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:11:58.756055 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:11:58.758664 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:11:58.761217 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:11:58.767076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:11:58.771942 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:11:58.783102 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:11:58.787951 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:58.788155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:11:58.792899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:11:58.803391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:11:58.809064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:11:58.810541 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:58.813068 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:11:58.814458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 11:11:58.815892 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:11:58.817762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:11:58.818018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:11:58.827345 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:11:58.827599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:11:58.833328 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:11:58.833602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:11:58.834385 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 29 11:11:58.836553 augenrules[1355]: No rules Jan 29 11:11:58.838489 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:11:58.838819 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:11:58.845703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:58.846275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:11:58.855140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:11:58.865739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:11:58.869559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:11:58.871300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:58.874922 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:11:58.876185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:58.877344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:11:58.880785 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:11:58.882706 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:11:58.884924 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:11:58.887439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:11:58.887809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:11:58.893846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:11:58.894136 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:11:58.896290 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:11:58.896483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:11:58.911971 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:11:58.920484 systemd[1]: Finished ensure-sysext.service. Jan 29 11:11:58.927355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:58.999651 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:11:59.001032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 11:11:59.006575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:11:59.016984 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:11:59.021517 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:11:59.024018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:11:59.025396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:11:59.028465 systemd-resolved[1329]: Positive Trust Anchors: Jan 29 11:11:59.028487 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:11:59.028527 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:11:59.030333 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:11:59.037010 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:11:59.038371 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:11:59.038398 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:11:59.039246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:11:59.039891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:11:59.040107 systemd-resolved[1329]: Defaulting to hostname 'linux'. Jan 29 11:11:59.043800 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:11:59.045505 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:11:59.047254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:11:59.047520 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:11:59.050087 augenrules[1400]: /sbin/augenrules: No change Jan 29 11:11:59.050578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:11:59.053739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:11:59.060874 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1382) Jan 29 11:11:59.118926 augenrules[1432]: No rules Jan 29 11:11:59.095578 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:11:59.095800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:11:59.126305 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:11:59.126686 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 29 11:11:59.128858 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:11:59.129691 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:11:59.150121 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:11:59.170852 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:11:59.175854 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:11:59.182581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:11:59.189821 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:11:59.191641 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:11:59.198857 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:11:59.202651 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:11:59.204026 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:11:59.208767 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:11:59.212997 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:11:59.217785 systemd-networkd[1416]: lo: Link UP Jan 29 11:11:59.219378 systemd-networkd[1416]: lo: Gained carrier Jan 29 11:11:59.222314 systemd-networkd[1416]: Enumeration completed Jan 29 11:11:59.227118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:11:59.230990 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:11:59.231219 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:11:59.231223 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:11:59.233076 systemd[1]: Reached target network.target - Network. Jan 29 11:11:59.233444 systemd-networkd[1416]: eth0: Link UP Jan 29 11:11:59.233448 systemd-networkd[1416]: eth0: Gained carrier Jan 29 11:11:59.233463 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:11:59.236400 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:11:59.241216 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:11:59.248903 systemd-networkd[1416]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:11:59.250370 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Jan 29 11:11:59.713113 systemd-timesyncd[1417]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:11:59.713168 systemd-timesyncd[1417]: Initial clock synchronization to Wed 2025-01-29 11:11:59.713007 UTC. Jan 29 11:11:59.713202 systemd-resolved[1329]: Clock change detected. Flushing caches. Jan 29 11:11:59.716988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:11:59.805112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:11:59.811875 kernel: kvm_amd: TSC scaling supported Jan 29 11:11:59.811913 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:11:59.811936 kernel: kvm_amd: Nested Paging enabled Jan 29 11:11:59.811950 kernel: kvm_amd: LBR virtualization supported Jan 29 11:11:59.812983 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:11:59.813021 kernel: kvm_amd: Virtual GIF supported Jan 29 11:11:59.832434 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:11:59.861710 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:11:59.878585 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:11:59.888016 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:11:59.927161 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:11:59.928824 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:11:59.929984 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:11:59.931194 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:11:59.932519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:11:59.934085 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:11:59.935364 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:11:59.936689 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:11:59.938008 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:11:59.938053 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:11:59.939017 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:11:59.941043 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:11:59.944031 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:11:59.952265 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:11:59.954966 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:11:59.956694 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:11:59.957983 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:11:59.959040 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:11:59.960078 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:11:59.960111 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:11:59.961228 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:11:59.963462 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:11:59.965269 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:11:59.968430 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:11:59.971578 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 29 11:11:59.972737 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:11:59.974609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:11:59.981985 jq[1465]: false Jan 29 11:11:59.982031 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:11:59.984460 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:11:59.987576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:11:59.994528 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:11:59.997129 extend-filesystems[1466]: Found loop3 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found loop4 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found loop5 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found sr0 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found vda Jan 29 11:11:59.997129 extend-filesystems[1466]: Found vda1 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found vda2 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found vda3 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found usr Jan 29 11:11:59.997129 extend-filesystems[1466]: Found vda4 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found vda6 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found vda7 Jan 29 11:11:59.997129 extend-filesystems[1466]: Found vda9 Jan 29 11:11:59.997129 extend-filesystems[1466]: Checking size of /dev/vda9 Jan 29 11:11:59.996073 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:11:59.996697 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:11:59.997537 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:12:00.001963 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:12:00.023246 jq[1475]: true Jan 29 11:12:00.006337 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:12:00.032955 dbus-daemon[1464]: [system] SELinux support is enabled Jan 29 11:12:00.010985 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:12:00.011496 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:12:00.016623 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:12:00.016848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:12:00.038487 update_engine[1474]: I20250129 11:12:00.036650 1474 main.cc:92] Flatcar Update Engine starting Jan 29 11:12:00.038487 update_engine[1474]: I20250129 11:12:00.038273 1474 update_check_scheduler.cc:74] Next update check in 5m6s Jan 29 11:12:00.038844 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:12:00.043234 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:12:00.043598 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 29 11:12:00.046242 extend-filesystems[1466]: Resized partition /dev/vda9 Jan 29 11:12:00.050029 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:12:00.057091 (ntainerd)[1490]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:12:00.063432 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1397) Jan 29 11:12:00.066938 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:12:00.068443 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:12:00.068474 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:12:00.070068 tar[1479]: linux-amd64/helm Jan 29 11:12:00.072368 jq[1489]: true Jan 29 11:12:00.070835 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:12:00.070853 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:12:00.084932 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:12:00.090918 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:12:00.098363 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:12:00.109349 systemd-logind[1472]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:12:00.109383 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:12:00.112815 systemd-logind[1472]: New seat seat0. Jan 29 11:12:00.115784 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:12:00.302832 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:12:00.358903 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:12:00.404752 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:12:00.414699 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:12:01.409558 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:12:00.445237 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:54046.service - OpenSSH per-connection server daemon (10.0.0.1:54046). Jan 29 11:12:00.477905 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:12:00.478170 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:12:01.410674 extend-filesystems[1491]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:12:01.410674 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:12:01.410674 extend-filesystems[1491]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:12:01.460011 containerd[1490]: time="2025-01-29T11:12:01.410508063Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:12:00.498048 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:12:01.460392 extend-filesystems[1466]: Resized filesystem in /dev/vda9 Jan 29 11:12:00.609345 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 29 11:12:00.664941 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:12:00.790292 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:12:00.791736 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:12:01.414211 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:12:01.414516 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:12:01.463624 containerd[1490]: time="2025-01-29T11:12:01.463308954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:01.465904 containerd[1490]: time="2025-01-29T11:12:01.465638462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:01.465904 containerd[1490]: time="2025-01-29T11:12:01.465685811Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:12:01.465904 containerd[1490]: time="2025-01-29T11:12:01.465709726Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:12:01.465994 containerd[1490]: time="2025-01-29T11:12:01.465964614Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:12:01.465994 containerd[1490]: time="2025-01-29T11:12:01.465982958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:01.466132 containerd[1490]: time="2025-01-29T11:12:01.466061966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:01.466132 containerd[1490]: time="2025-01-29T11:12:01.466081523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:01.466491 containerd[1490]: time="2025-01-29T11:12:01.466321483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:01.466491 containerd[1490]: time="2025-01-29T11:12:01.466342021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:01.466491 containerd[1490]: time="2025-01-29T11:12:01.466355126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:01.466491 containerd[1490]: time="2025-01-29T11:12:01.466365766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:01.466596 containerd[1490]: time="2025-01-29T11:12:01.466508193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:01.466874 containerd[1490]: time="2025-01-29T11:12:01.466799700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:12:01.467028 containerd[1490]: time="2025-01-29T11:12:01.466987291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:01.467028 containerd[1490]: time="2025-01-29T11:12:01.467006557Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:12:01.467175 containerd[1490]: time="2025-01-29T11:12:01.467129768Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:12:01.467288 containerd[1490]: time="2025-01-29T11:12:01.467208516Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:12:01.563022 tar[1479]: linux-amd64/LICENSE Jan 29 11:12:01.563568 tar[1479]: linux-amd64/README.md Jan 29 11:12:01.578061 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:12:01.603724 sshd[1534]: Connection closed by authenticating user core 10.0.0.1 port 54046 [preauth] Jan 29 11:12:01.607327 systemd[1]: sshd@0-10.0.0.80:22-10.0.0.1:54046.service: Deactivated successfully. Jan 29 11:12:01.622549 systemd-networkd[1416]: eth0: Gained IPv6LL Jan 29 11:12:01.626288 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:12:01.704347 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:12:01.717696 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:12:01.720976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:01.723758 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:12:01.744054 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:12:01.744351 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:12:01.746177 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:12:01.769466 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:12:01.773722 bash[1519]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:12:01.776922 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:12:01.817876 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:12:01.846792 containerd[1490]: time="2025-01-29T11:12:01.846698610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:12:01.846792 containerd[1490]: time="2025-01-29T11:12:01.846790843Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:12:01.846988 containerd[1490]: time="2025-01-29T11:12:01.846811191Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:12:01.846988 containerd[1490]: time="2025-01-29T11:12:01.846828333Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:12:01.846988 containerd[1490]: time="2025-01-29T11:12:01.846844223Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 29 11:12:01.847107 containerd[1490]: time="2025-01-29T11:12:01.847082650Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:12:01.847336 containerd[1490]: time="2025-01-29T11:12:01.847315366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:12:01.847485 containerd[1490]: time="2025-01-29T11:12:01.847450029Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:12:01.847485 containerd[1490]: time="2025-01-29T11:12:01.847469535Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:12:01.847485 containerd[1490]: time="2025-01-29T11:12:01.847482650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:12:01.847578 containerd[1490]: time="2025-01-29T11:12:01.847495394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:12:01.847578 containerd[1490]: time="2025-01-29T11:12:01.847507497Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:12:01.847578 containerd[1490]: time="2025-01-29T11:12:01.847519850Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:12:01.847578 containerd[1490]: time="2025-01-29T11:12:01.847532914Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:12:01.847578 containerd[1490]: time="2025-01-29T11:12:01.847546820Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:12:01.847578 containerd[1490]: time="2025-01-29T11:12:01.847559254Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:12:01.847578 containerd[1490]: time="2025-01-29T11:12:01.847570875Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:12:01.847578 containerd[1490]: time="2025-01-29T11:12:01.847583269Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847610029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847626760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847638462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847650926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847663028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847675692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847686973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847698675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847710517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847724774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847738890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847756073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847770329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847788273Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:12:01.847810 containerd[1490]: time="2025-01-29T11:12:01.847814763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847827867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847838487Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847896807Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847913999Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847923687Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847934287Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847942983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847954485Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847964373Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:12:01.848337 containerd[1490]: time="2025-01-29T11:12:01.847974172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:12:01.848639 containerd[1490]: time="2025-01-29T11:12:01.848249718Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:12:01.848639 containerd[1490]: time="2025-01-29T11:12:01.848291927Z" level=info msg="Connect containerd service" Jan 29 11:12:01.848639 containerd[1490]: time="2025-01-29T11:12:01.848327013Z" level=info msg="using legacy CRI server" Jan 29 11:12:01.848639 containerd[1490]: time="2025-01-29T11:12:01.848335148Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:12:01.848639 containerd[1490]: time="2025-01-29T11:12:01.848563046Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:12:01.850052 containerd[1490]: time="2025-01-29T11:12:01.850004308Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:12:01.850190 
containerd[1490]: time="2025-01-29T11:12:01.850141155Z" level=info msg="Start subscribing containerd event" Jan 29 11:12:01.850234 containerd[1490]: time="2025-01-29T11:12:01.850204544Z" level=info msg="Start recovering state" Jan 29 11:12:01.850293 containerd[1490]: time="2025-01-29T11:12:01.850268604Z" level=info msg="Start event monitor" Jan 29 11:12:01.850293 containerd[1490]: time="2025-01-29T11:12:01.850282670Z" level=info msg="Start snapshots syncer" Jan 29 11:12:01.850293 containerd[1490]: time="2025-01-29T11:12:01.850291617Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:12:01.850382 containerd[1490]: time="2025-01-29T11:12:01.850298971Z" level=info msg="Start streaming server" Jan 29 11:12:01.850428 containerd[1490]: time="2025-01-29T11:12:01.850385753Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:12:01.850479 containerd[1490]: time="2025-01-29T11:12:01.850459932Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:12:01.850529 containerd[1490]: time="2025-01-29T11:12:01.850518061Z" level=info msg="containerd successfully booted in 0.664481s" Jan 29 11:12:01.850624 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:12:02.877383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:02.879232 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:12:02.882287 systemd[1]: Startup finished in 857ms (kernel) + 6.438s (initrd) + 6.309s (userspace) = 13.604s. Jan 29 11:12:02.898367 agetty[1545]: failed to open credentials directory Jan 29 11:12:02.898883 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:12:02.908557 agetty[1544]: failed to open credentials directory Jan 29 11:12:03.619608 kubelet[1582]: E0129 11:12:03.619469 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:12:03.624556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:12:03.624814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:12:03.625298 systemd[1]: kubelet.service: Consumed 1.602s CPU time. Jan 29 11:12:11.616304 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:37362.service - OpenSSH per-connection server daemon (10.0.0.1:37362). Jan 29 11:12:11.658974 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 37362 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:12:11.660735 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:11.670238 systemd-logind[1472]: New session 1 of user core. Jan 29 11:12:11.671637 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:12:11.682802 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:12:11.697088 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:12:11.711769 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 11:12:11.715098 (systemd)[1599]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:12:11.832748 systemd[1599]: Queued start job for default target default.target. Jan 29 11:12:11.844883 systemd[1599]: Created slice app.slice - User Application Slice. Jan 29 11:12:11.844916 systemd[1599]: Reached target paths.target - Paths. Jan 29 11:12:11.844936 systemd[1599]: Reached target timers.target - Timers. Jan 29 11:12:11.846714 systemd[1599]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:12:11.859827 systemd[1599]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:12:11.859998 systemd[1599]: Reached target sockets.target - Sockets. Jan 29 11:12:11.860025 systemd[1599]: Reached target basic.target - Basic System. Jan 29 11:12:11.860077 systemd[1599]: Reached target default.target - Main User Target. Jan 29 11:12:11.860122 systemd[1599]: Startup finished in 138ms. Jan 29 11:12:11.860583 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:12:11.862358 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:12:11.923075 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:37374.service - OpenSSH per-connection server daemon (10.0.0.1:37374). Jan 29 11:12:11.971262 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 37374 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:12:11.973042 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:11.977572 systemd-logind[1472]: New session 2 of user core. Jan 29 11:12:11.992676 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:12:12.049924 sshd[1612]: Connection closed by 10.0.0.1 port 37374 Jan 29 11:12:12.050304 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:12.068399 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:37374.service: Deactivated successfully. Jan 29 11:12:12.070363 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:12:12.071857 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:12:12.083728 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:37376.service - OpenSSH per-connection server daemon (10.0.0.1:37376). Jan 29 11:12:12.084811 systemd-logind[1472]: Removed session 2. Jan 29 11:12:12.120778 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 37376 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:12:12.122321 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:12.126425 systemd-logind[1472]: New session 3 of user core. Jan 29 11:12:12.136531 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:12:12.186962 sshd[1619]: Connection closed by 10.0.0.1 port 37376 Jan 29 11:12:12.187316 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:12.200442 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:37376.service: Deactivated successfully. Jan 29 11:12:12.202429 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:12:12.204062 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:12:12.205311 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:37388.service - OpenSSH per-connection server daemon (10.0.0.1:37388). Jan 29 11:12:12.206104 systemd-logind[1472]: Removed session 3. 
Jan 29 11:12:12.263766 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 37388 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:12:12.265794 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:12.270032 systemd-logind[1472]: New session 4 of user core. Jan 29 11:12:12.279535 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:12:12.336050 sshd[1626]: Connection closed by 10.0.0.1 port 37388 Jan 29 11:12:12.336533 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:12.347136 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:37388.service: Deactivated successfully. Jan 29 11:12:12.348805 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:12:12.350280 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:12:12.351532 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:37398.service - OpenSSH per-connection server daemon (10.0.0.1:37398). Jan 29 11:12:12.352291 systemd-logind[1472]: Removed session 4. Jan 29 11:12:12.392541 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 37398 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:12:12.393988 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:12.398641 systemd-logind[1472]: New session 5 of user core. Jan 29 11:12:12.412577 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:12:12.472524 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:12:12.472884 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:12:12.498678 sudo[1634]: pam_unix(sudo:session): session closed for user root Jan 29 11:12:12.500103 sshd[1633]: Connection closed by 10.0.0.1 port 37398 Jan 29 11:12:12.500441 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:12.511588 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:37398.service: Deactivated successfully. Jan 29 11:12:12.514125 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:12:12.516273 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:12:12.528936 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:37412.service - OpenSSH per-connection server daemon (10.0.0.1:37412). Jan 29 11:12:12.530138 systemd-logind[1472]: Removed session 5. Jan 29 11:12:12.568103 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 37412 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:12:12.569864 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:12.574701 systemd-logind[1472]: New session 6 of user core. Jan 29 11:12:12.586551 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 11:12:12.643176 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:12:12.643594 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:12:12.647926 sudo[1643]: pam_unix(sudo:session): session closed for user root Jan 29 11:12:12.656666 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:12:12.657119 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:12:12.683836 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:12:12.715521 augenrules[1665]: No rules Jan 29 11:12:12.716541 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:12:12.716844 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:12:12.718098 sudo[1642]: pam_unix(sudo:session): session closed for user root Jan 29 11:12:12.719707 sshd[1641]: Connection closed by 10.0.0.1 port 37412 Jan 29 11:12:12.720054 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:12.727440 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:37412.service: Deactivated successfully. Jan 29 11:12:12.729424 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:12:12.731242 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:12:12.741852 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:37414.service - OpenSSH per-connection server daemon (10.0.0.1:37414). Jan 29 11:12:12.742828 systemd-logind[1472]: Removed session 6. Jan 29 11:12:12.778760 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 37414 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:12:12.780528 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:12.785765 systemd-logind[1472]: New session 7 of user core. Jan 29 11:12:12.799554 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:12:12.855654 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:12:12.856011 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:12:13.392723 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:12:13.392917 (dockerd)[1696]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:12:13.680128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:12:13.689644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:13.886523 dockerd[1696]: time="2025-01-29T11:12:13.886441515Z" level=info msg="Starting up" Jan 29 11:12:13.923106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:12:13.931794 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:12:14.255995 kubelet[1716]: E0129 11:12:14.255845 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:12:14.261996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:12:14.262247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:12:14.352078 dockerd[1696]: time="2025-01-29T11:12:14.352005066Z" level=info msg="Loading containers: start." Jan 29 11:12:14.904453 kernel: Initializing XFRM netlink socket Jan 29 11:12:15.009059 systemd-networkd[1416]: docker0: Link UP Jan 29 11:12:15.059419 dockerd[1696]: time="2025-01-29T11:12:15.059333737Z" level=info msg="Loading containers: done." Jan 29 11:12:15.074721 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2926290204-merged.mount: Deactivated successfully. Jan 29 11:12:15.077439 dockerd[1696]: time="2025-01-29T11:12:15.077348966Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:12:15.077542 dockerd[1696]: time="2025-01-29T11:12:15.077511881Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 11:12:15.077716 dockerd[1696]: time="2025-01-29T11:12:15.077683874Z" level=info msg="Daemon has completed initialization" Jan 29 11:12:15.119828 dockerd[1696]: time="2025-01-29T11:12:15.119740199Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:12:15.120086 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:12:15.872608 containerd[1490]: time="2025-01-29T11:12:15.872552853Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:12:17.830274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269000813.mount: Deactivated successfully. 
Jan 29 11:12:18.745688 containerd[1490]: time="2025-01-29T11:12:18.745614693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:18.746394 containerd[1490]: time="2025-01-29T11:12:18.746315507Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:12:18.747620 containerd[1490]: time="2025-01-29T11:12:18.747567304Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:18.753247 containerd[1490]: time="2025-01-29T11:12:18.753187151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:18.754566 containerd[1490]: time="2025-01-29T11:12:18.754517376Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.881898579s" Jan 29 11:12:18.754613 containerd[1490]: time="2025-01-29T11:12:18.754567870Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:12:18.756132 containerd[1490]: time="2025-01-29T11:12:18.756092439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:12:20.312375 containerd[1490]: time="2025-01-29T11:12:20.312278928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:20.313364 containerd[1490]: time="2025-01-29T11:12:20.313267391Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:12:20.316452 containerd[1490]: time="2025-01-29T11:12:20.316377934Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:20.320969 containerd[1490]: time="2025-01-29T11:12:20.320907567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:20.322205 containerd[1490]: time="2025-01-29T11:12:20.322148073Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.566008596s" Jan 29 11:12:20.322205 containerd[1490]: time="2025-01-29T11:12:20.322187767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:12:20.322986 
containerd[1490]: time="2025-01-29T11:12:20.322844859Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:12:21.840464 containerd[1490]: time="2025-01-29T11:12:21.840346605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:21.841103 containerd[1490]: time="2025-01-29T11:12:21.841063639Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:12:21.842485 containerd[1490]: time="2025-01-29T11:12:21.842435221Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:21.845228 containerd[1490]: time="2025-01-29T11:12:21.845153007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:21.846736 containerd[1490]: time="2025-01-29T11:12:21.846691211Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.52380794s" Jan 29 11:12:21.846736 containerd[1490]: time="2025-01-29T11:12:21.846722209Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:12:21.847262 containerd[1490]: time="2025-01-29T11:12:21.847222066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:12:24.430191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:12:24.506795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:24.744756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:24.750867 (kubelet)[1985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:12:24.856770 kubelet[1985]: E0129 11:12:24.856352 1985 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:12:24.861905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:12:24.862205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:12:25.171358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368900566.mount: Deactivated successfully. 
Jan 29 11:12:27.193833 containerd[1490]: time="2025-01-29T11:12:27.193691805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:27.197598 containerd[1490]: time="2025-01-29T11:12:27.197524061Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:12:27.208969 containerd[1490]: time="2025-01-29T11:12:27.208925078Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:27.227910 containerd[1490]: time="2025-01-29T11:12:27.227848900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:27.228674 containerd[1490]: time="2025-01-29T11:12:27.228635815Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 5.381381699s" Jan 29 11:12:27.228745 containerd[1490]: time="2025-01-29T11:12:27.228674137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:12:27.229222 containerd[1490]: time="2025-01-29T11:12:27.229193120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:12:28.640991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602753448.mount: Deactivated successfully. 
Jan 29 11:12:32.127172 containerd[1490]: time="2025-01-29T11:12:32.127090802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:32.128006 containerd[1490]: time="2025-01-29T11:12:32.127934270Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:12:32.129151 containerd[1490]: time="2025-01-29T11:12:32.129109957Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:32.132549 containerd[1490]: time="2025-01-29T11:12:32.132493519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:32.133819 containerd[1490]: time="2025-01-29T11:12:32.133744009Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 4.904511805s" Jan 29 11:12:32.133819 containerd[1490]: time="2025-01-29T11:12:32.133801039Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:12:32.134724 containerd[1490]: time="2025-01-29T11:12:32.134686007Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:12:32.591469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777710746.mount: Deactivated successfully. 
Jan 29 11:12:32.598634 containerd[1490]: time="2025-01-29T11:12:32.598586496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:32.599316 containerd[1490]: time="2025-01-29T11:12:32.599264076Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:12:32.600438 containerd[1490]: time="2025-01-29T11:12:32.600378275Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:32.602822 containerd[1490]: time="2025-01-29T11:12:32.602781637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:32.603742 containerd[1490]: time="2025-01-29T11:12:32.603705829Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.98205ms" Jan 29 11:12:32.603825 containerd[1490]: time="2025-01-29T11:12:32.603741498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:12:32.604311 containerd[1490]: time="2025-01-29T11:12:32.604277827Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:12:33.519546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1612451297.mount: Deactivated successfully. Jan 29 11:12:34.930243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:12:34.942654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:35.108091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:35.114264 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:12:35.195994 kubelet[2106]: E0129 11:12:35.195823 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:12:35.200466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:12:35.200688 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:12:35.706233 containerd[1490]: time="2025-01-29T11:12:35.706168027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:35.707056 containerd[1490]: time="2025-01-29T11:12:35.706982303Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:12:35.708218 containerd[1490]: time="2025-01-29T11:12:35.708183920Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:35.711340 containerd[1490]: time="2025-01-29T11:12:35.711303312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:12:35.712844 containerd[1490]: time="2025-01-29T11:12:35.712808248Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.108497608s" Jan 29 11:12:35.712901 containerd[1490]: time="2025-01-29T11:12:35.712847964Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:12:37.808794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:37.818643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:37.854763 systemd[1]: Reloading requested from client PID 2145 ('systemctl') (unit session-7.scope)... Jan 29 11:12:37.854792 systemd[1]: Reloading... Jan 29 11:12:37.962436 zram_generator::config[2190]: No configuration found. Jan 29 11:12:38.492902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:12:38.575006 systemd[1]: Reloading finished in 719 ms. Jan 29 11:12:38.627881 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:38.632996 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:12:38.633265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:38.635067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:38.797172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:38.802801 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:12:38.861952 kubelet[2234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:12:38.861952 kubelet[2234]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 11:12:38.861952 kubelet[2234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:12:38.862437 kubelet[2234]: I0129 11:12:38.861994 2234 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:12:39.083820 kubelet[2234]: I0129 11:12:39.083676 2234 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:12:39.083820 kubelet[2234]: I0129 11:12:39.083723 2234 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:12:39.084615 kubelet[2234]: I0129 11:12:39.084504 2234 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:12:39.138108 kubelet[2234]: I0129 11:12:39.138032 2234 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:12:39.141050 kubelet[2234]: E0129 11:12:39.141008 2234 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:39.181833 kubelet[2234]: E0129 11:12:39.181779 2234 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:12:39.181833 kubelet[2234]: I0129 11:12:39.181826 2234 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:12:39.206185 kubelet[2234]: I0129 11:12:39.206140 2234 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:12:39.206281 kubelet[2234]: I0129 11:12:39.206256 2234 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:12:39.206493 kubelet[2234]: I0129 11:12:39.206447 2234 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:12:39.206651 kubelet[2234]: I0129 11:12:39.206483 2234 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:12:39.206797 kubelet[2234]: I0129 11:12:39.206654 2234 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:12:39.206797 kubelet[2234]: I0129 11:12:39.206664 2234 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:12:39.206852 kubelet[2234]: I0129 11:12:39.206801 2234 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:12:39.208210 kubelet[2234]: I0129 11:12:39.208180 2234 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:12:39.208210 kubelet[2234]: I0129 11:12:39.208205 2234 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:12:39.208284 kubelet[2234]: I0129 11:12:39.208243 2234 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:12:39.208284 kubelet[2234]: I0129 11:12:39.208260 2234 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:12:39.244382 kubelet[2234]: W0129 11:12:39.244131 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:39.244382 kubelet[2234]: E0129 11:12:39.244210 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:39.245871 kubelet[2234]: W0129 11:12:39.245822 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:39.245929 kubelet[2234]: E0129 11:12:39.245877 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:39.252007 kubelet[2234]: I0129 11:12:39.251965 2234 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:12:39.254749 kubelet[2234]: I0129 11:12:39.254718 2234 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:12:39.254832 kubelet[2234]: W0129 11:12:39.254808 2234 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:12:39.255921 kubelet[2234]: I0129 11:12:39.255889 2234 server.go:1269] "Started kubelet" Jan 29 11:12:39.256041 kubelet[2234]: I0129 11:12:39.255976 2234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:12:39.260455 kubelet[2234]: I0129 11:12:39.260393 2234 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:12:39.264716 kubelet[2234]: I0129 11:12:39.264143 2234 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:12:39.266155 kubelet[2234]: I0129 11:12:39.265711 2234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:12:39.266155 kubelet[2234]: I0129 11:12:39.266103 2234 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:12:39.267017 kubelet[2234]: I0129 11:12:39.266866 2234 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:12:39.268802 kubelet[2234]: E0129 11:12:39.268165 2234 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:12:39.268802 kubelet[2234]: E0129 11:12:39.268254 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:39.268802 kubelet[2234]: I0129 11:12:39.268287 2234 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:12:39.268802 kubelet[2234]: I0129 11:12:39.268655 2234 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:12:39.268802 kubelet[2234]: I0129 11:12:39.268720 2234 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:12:39.270440 kubelet[2234]: W0129 11:12:39.269102 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:39.270440 kubelet[2234]: E0129 11:12:39.269153 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:39.270440 kubelet[2234]: I0129 11:12:39.269536 2234 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:12:39.270440 kubelet[2234]: I0129 11:12:39.269617 2234 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:12:39.270440 kubelet[2234]: E0129 11:12:39.270199 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Jan 29 11:12:39.271161 kubelet[2234]: I0129 11:12:39.271130 2234 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:12:39.272403 kubelet[2234]: E0129 11:12:39.269985 2234 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f25738416c749 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:12:39.255852873 +0000 UTC m=+0.445045351,LastTimestamp:2025-01-29 11:12:39.255852873 +0000 UTC m=+0.445045351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:12:39.290781 kubelet[2234]: I0129 11:12:39.290725 2234 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 29 11:12:39.292056 kubelet[2234]: I0129 11:12:39.291161 2234 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:12:39.292056 kubelet[2234]: I0129 11:12:39.291173 2234 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:12:39.292056 kubelet[2234]: I0129 11:12:39.291191 2234 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:12:39.292630 kubelet[2234]: I0129 11:12:39.292612 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:12:39.292709 kubelet[2234]: I0129 11:12:39.292656 2234 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:12:39.293661 kubelet[2234]: I0129 11:12:39.293490 2234 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:12:39.293661 kubelet[2234]: E0129 11:12:39.293544 2234 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:12:39.369545 kubelet[2234]: E0129 11:12:39.369368 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:39.393924 kubelet[2234]: E0129 11:12:39.393831 2234 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:12:39.470401 kubelet[2234]: E0129 11:12:39.470327 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:39.470869 kubelet[2234]: E0129 11:12:39.470811 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Jan 29 11:12:39.571276 kubelet[2234]: E0129 11:12:39.571180 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:39.594561 kubelet[2234]: E0129 11:12:39.594478 2234 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:12:39.672360 kubelet[2234]: E0129 11:12:39.672154 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:39.773036 kubelet[2234]: E0129 11:12:39.772924 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:39.872063 kubelet[2234]: E0129 11:12:39.871973 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Jan 29 11:12:39.874040 kubelet[2234]: E0129 11:12:39.873981 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:39.935813 kubelet[2234]: W0129 11:12:39.935756 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:39.935998 kubelet[2234]: E0129 11:12:39.935821 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:39.974566 kubelet[2234]: E0129 11:12:39.974475 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:39.994710 kubelet[2234]: E0129 11:12:39.994636 2234 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:12:40.003340 kubelet[2234]: I0129 11:12:40.003286 2234 policy_none.go:49] "None policy: Start" Jan 29 11:12:40.004481 kubelet[2234]: I0129 11:12:40.004456 2234 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:12:40.004538 kubelet[2234]: I0129 11:12:40.004486 2234 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:12:40.075042 kubelet[2234]: E0129 11:12:40.074985 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.106768 kubelet[2234]: W0129 11:12:40.106732 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:40.106830 kubelet[2234]: E0129 11:12:40.106773 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:40.175599 kubelet[2234]: E0129 11:12:40.175549 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.276345 kubelet[2234]: E0129 11:12:40.276167 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.376973 kubelet[2234]: E0129 11:12:40.376897 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.477620 kubelet[2234]: E0129 11:12:40.477551 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.578237 kubelet[2234]: E0129 11:12:40.578075 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.581127 kubelet[2234]: W0129 11:12:40.581091 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:40.581205 kubelet[2234]: E0129 11:12:40.581134 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:40.607926 kubelet[2234]: W0129 11:12:40.607876 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:40.607926 kubelet[2234]: E0129 11:12:40.607915 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:40.673014 kubelet[2234]: E0129 11:12:40.672928 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s" Jan 29 11:12:40.679136 kubelet[2234]: E0129 11:12:40.679073 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.780007 kubelet[2234]: E0129 11:12:40.779939 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.795090 kubelet[2234]: E0129 11:12:40.795050 2234 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:12:40.880735 kubelet[2234]: E0129 11:12:40.880566 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:40.951004 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:12:40.962168 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:12:40.965527 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 11:12:40.976674 kubelet[2234]: I0129 11:12:40.976638 2234 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:12:40.976949 kubelet[2234]: I0129 11:12:40.976931 2234 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:12:40.977020 kubelet[2234]: I0129 11:12:40.976950 2234 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:12:40.977496 kubelet[2234]: I0129 11:12:40.977250 2234 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:12:40.978581 kubelet[2234]: E0129 11:12:40.978555 2234 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:12:41.078941 kubelet[2234]: I0129 11:12:41.078907 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:12:41.079460 kubelet[2234]: E0129 11:12:41.079397 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jan 29 11:12:41.262995 kubelet[2234]: E0129 11:12:41.262920 2234 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:41.280719 kubelet[2234]: I0129 11:12:41.280693 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:12:41.280990 kubelet[2234]: E0129 11:12:41.280951 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jan 29 11:12:41.317695 kubelet[2234]: W0129 11:12:41.317652 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:41.317770 kubelet[2234]: E0129 11:12:41.317704 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:41.682809 kubelet[2234]: I0129 11:12:41.682761 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:12:41.683092 kubelet[2234]: E0129 11:12:41.683056 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jan 29 11:12:41.913113 kubelet[2234]: W0129 11:12:41.913011 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:41.913113 kubelet[2234]: E0129 11:12:41.913060 2234 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:42.273459 kubelet[2234]: E0129 11:12:42.273369 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="3.2s" Jan 29 11:12:42.406924 systemd[1]: Created slice kubepods-burstable-pod6d9e61e2f40dc43d4b8f6592fefdcb3f.slice - libcontainer container kubepods-burstable-pod6d9e61e2f40dc43d4b8f6592fefdcb3f.slice. Jan 29 11:12:42.423987 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 11:12:42.435319 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 29 11:12:42.484566 kubelet[2234]: I0129 11:12:42.484532 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:12:42.484877 kubelet[2234]: E0129 11:12:42.484842 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jan 29 11:12:42.489078 kubelet[2234]: I0129 11:12:42.489034 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d9e61e2f40dc43d4b8f6592fefdcb3f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d9e61e2f40dc43d4b8f6592fefdcb3f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:42.489078 kubelet[2234]: I0129 11:12:42.489071 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:42.489078 kubelet[2234]: I0129 11:12:42.489093 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:42.489296 kubelet[2234]: I0129 11:12:42.489108 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:42.489296 kubelet[2234]: I0129 11:12:42.489124 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:42.489296 kubelet[2234]: I0129 11:12:42.489165 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d9e61e2f40dc43d4b8f6592fefdcb3f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d9e61e2f40dc43d4b8f6592fefdcb3f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:42.489296 kubelet[2234]: I0129 11:12:42.489199 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d9e61e2f40dc43d4b8f6592fefdcb3f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6d9e61e2f40dc43d4b8f6592fefdcb3f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:42.489296 kubelet[2234]: I0129 11:12:42.489252 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:42.489457 kubelet[2234]: I0129 11:12:42.489315 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:12:42.722485 kubelet[2234]: E0129 11:12:42.722451 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:42.723223 containerd[1490]: time="2025-01-29T11:12:42.723180053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6d9e61e2f40dc43d4b8f6592fefdcb3f,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:42.732440 kubelet[2234]: E0129 11:12:42.732395 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:42.732851 containerd[1490]: time="2025-01-29T11:12:42.732794473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:42.738031 kubelet[2234]: E0129 11:12:42.738003 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:42.738303 containerd[1490]: time="2025-01-29T11:12:42.738274774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:42.864731 kubelet[2234]: W0129 11:12:42.864655 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:42.865054 kubelet[2234]: E0129 11:12:42.864735 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:43.228744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336223579.mount: Deactivated successfully. Jan 29 11:12:43.235665 containerd[1490]: time="2025-01-29T11:12:43.235032747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:43.238674 containerd[1490]: time="2025-01-29T11:12:43.238607317Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:12:43.239699 containerd[1490]: time="2025-01-29T11:12:43.239669281Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:43.241513 containerd[1490]: time="2025-01-29T11:12:43.241478192Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:43.242282 containerd[1490]: time="2025-01-29T11:12:43.242209970Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:12:43.243174 containerd[1490]: time="2025-01-29T11:12:43.243130005Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:43.244003 containerd[1490]: time="2025-01-29T11:12:43.243969718Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:12:43.245025 containerd[1490]: time="2025-01-29T11:12:43.244992137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:43.246084 containerd[1490]: time="2025-01-29T11:12:43.246058259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.69636ms" Jan 29 11:12:43.246950 containerd[1490]: time="2025-01-29T11:12:43.246911477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.631744ms" Jan 29 11:12:43.251447 containerd[1490]: time="2025-01-29T11:12:43.251394751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
518.488255ms" Jan 29 11:12:43.373310 containerd[1490]: time="2025-01-29T11:12:43.372790053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:43.373310 containerd[1490]: time="2025-01-29T11:12:43.372861719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:43.373310 containerd[1490]: time="2025-01-29T11:12:43.372873812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:43.373310 containerd[1490]: time="2025-01-29T11:12:43.372972960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:43.374072 containerd[1490]: time="2025-01-29T11:12:43.373589400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:43.374072 containerd[1490]: time="2025-01-29T11:12:43.373874550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:43.374072 containerd[1490]: time="2025-01-29T11:12:43.373899417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:43.374356 containerd[1490]: time="2025-01-29T11:12:43.372076029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:43.374422 containerd[1490]: time="2025-01-29T11:12:43.374367305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:43.374469 containerd[1490]: time="2025-01-29T11:12:43.374435263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:43.374469 containerd[1490]: time="2025-01-29T11:12:43.374439702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:43.376293 containerd[1490]: time="2025-01-29T11:12:43.375683190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:43.396551 systemd[1]: Started cri-containerd-a57fdaf240f1bcc260e262f651ba9c00ec25a01ba5836ec2f0d419c8d5a96f18.scope - libcontainer container a57fdaf240f1bcc260e262f651ba9c00ec25a01ba5836ec2f0d419c8d5a96f18. Jan 29 11:12:43.400773 systemd[1]: Started cri-containerd-01ad309e7938ba203b7bfe3631684383822e017ccea742876c7023af86e4b01e.scope - libcontainer container 01ad309e7938ba203b7bfe3631684383822e017ccea742876c7023af86e4b01e. Jan 29 11:12:43.404100 systemd[1]: Started cri-containerd-5ddad32242861392552d5c3e94f5b84756463aa25ec526d0b42f083c0149f841.scope - libcontainer container 5ddad32242861392552d5c3e94f5b84756463aa25ec526d0b42f083c0149f841. 
Jan 29 11:12:43.691744 kubelet[2234]: W0129 11:12:43.691639 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:43.691744 kubelet[2234]: E0129 11:12:43.691714 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:43.699102 containerd[1490]: time="2025-01-29T11:12:43.698916297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"01ad309e7938ba203b7bfe3631684383822e017ccea742876c7023af86e4b01e\"" Jan 29 11:12:43.701944 kubelet[2234]: E0129 11:12:43.701917 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:43.703859 containerd[1490]: time="2025-01-29T11:12:43.703819707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a57fdaf240f1bcc260e262f651ba9c00ec25a01ba5836ec2f0d419c8d5a96f18\"" Jan 29 11:12:43.704801 kubelet[2234]: E0129 11:12:43.704773 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:43.704878 containerd[1490]: time="2025-01-29T11:12:43.704857805Z" level=info msg="CreateContainer within sandbox \"01ad309e7938ba203b7bfe3631684383822e017ccea742876c7023af86e4b01e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:12:43.706878 containerd[1490]: time="2025-01-29T11:12:43.706821971Z" level=info msg="CreateContainer within sandbox \"a57fdaf240f1bcc260e262f651ba9c00ec25a01ba5836ec2f0d419c8d5a96f18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:12:43.719426 containerd[1490]: time="2025-01-29T11:12:43.719273036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6d9e61e2f40dc43d4b8f6592fefdcb3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ddad32242861392552d5c3e94f5b84756463aa25ec526d0b42f083c0149f841\"" Jan 29 11:12:43.720000 kubelet[2234]: E0129 11:12:43.719971 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:43.721766 containerd[1490]: time="2025-01-29T11:12:43.721730467Z" level=info msg="CreateContainer within sandbox \"5ddad32242861392552d5c3e94f5b84756463aa25ec526d0b42f083c0149f841\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:12:43.723100 containerd[1490]: time="2025-01-29T11:12:43.723073564Z" level=info msg="CreateContainer within sandbox \"01ad309e7938ba203b7bfe3631684383822e017ccea742876c7023af86e4b01e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2928f11278a25016508cb2ac24bd91a0373075a74e3fd5efcd202cff4ad85983\"" Jan 29 11:12:43.723675 
containerd[1490]: time="2025-01-29T11:12:43.723634428Z" level=info msg="StartContainer for \"2928f11278a25016508cb2ac24bd91a0373075a74e3fd5efcd202cff4ad85983\"" Jan 29 11:12:43.734223 containerd[1490]: time="2025-01-29T11:12:43.734090038Z" level=info msg="CreateContainer within sandbox \"a57fdaf240f1bcc260e262f651ba9c00ec25a01ba5836ec2f0d419c8d5a96f18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67fe7038954dca59b4d6325ca710dd05ee08ab6112248b45e45638de2ee0e803\"" Jan 29 11:12:43.734613 containerd[1490]: time="2025-01-29T11:12:43.734588945Z" level=info msg="StartContainer for \"67fe7038954dca59b4d6325ca710dd05ee08ab6112248b45e45638de2ee0e803\"" Jan 29 11:12:43.744552 containerd[1490]: time="2025-01-29T11:12:43.744503680Z" level=info msg="CreateContainer within sandbox \"5ddad32242861392552d5c3e94f5b84756463aa25ec526d0b42f083c0149f841\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30bb7160c08121275c76b5ea708c27a633607742f58fa3542e532f1cee07c94f\"" Jan 29 11:12:43.745065 containerd[1490]: time="2025-01-29T11:12:43.745039406Z" level=info msg="StartContainer for \"30bb7160c08121275c76b5ea708c27a633607742f58fa3542e532f1cee07c94f\"" Jan 29 11:12:43.758851 systemd[1]: Started cri-containerd-2928f11278a25016508cb2ac24bd91a0373075a74e3fd5efcd202cff4ad85983.scope - libcontainer container 2928f11278a25016508cb2ac24bd91a0373075a74e3fd5efcd202cff4ad85983. Jan 29 11:12:43.763020 systemd[1]: Started cri-containerd-67fe7038954dca59b4d6325ca710dd05ee08ab6112248b45e45638de2ee0e803.scope - libcontainer container 67fe7038954dca59b4d6325ca710dd05ee08ab6112248b45e45638de2ee0e803. Jan 29 11:12:43.784568 systemd[1]: Started cri-containerd-30bb7160c08121275c76b5ea708c27a633607742f58fa3542e532f1cee07c94f.scope - libcontainer container 30bb7160c08121275c76b5ea708c27a633607742f58fa3542e532f1cee07c94f. 
Jan 29 11:12:43.842258 containerd[1490]: time="2025-01-29T11:12:43.842220413Z" level=info msg="StartContainer for \"67fe7038954dca59b4d6325ca710dd05ee08ab6112248b45e45638de2ee0e803\" returns successfully" Jan 29 11:12:43.846856 containerd[1490]: time="2025-01-29T11:12:43.846744243Z" level=info msg="StartContainer for \"2928f11278a25016508cb2ac24bd91a0373075a74e3fd5efcd202cff4ad85983\" returns successfully" Jan 29 11:12:43.856633 containerd[1490]: time="2025-01-29T11:12:43.856580329Z" level=info msg="StartContainer for \"30bb7160c08121275c76b5ea708c27a633607742f58fa3542e532f1cee07c94f\" returns successfully" Jan 29 11:12:43.863206 kubelet[2234]: W0129 11:12:43.863106 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jan 29 11:12:43.863206 kubelet[2234]: E0129 11:12:43.863179 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:12:43.874634 kubelet[2234]: E0129 11:12:43.874487 2234 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f25738416c749 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:12:39.255852873 +0000 UTC m=+0.445045351,LastTimestamp:2025-01-29 11:12:39.255852873 +0000 UTC m=+0.445045351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:12:44.086601 kubelet[2234]: I0129 11:12:44.086448 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:12:44.308181 kubelet[2234]: E0129 11:12:44.308135 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:44.310433 kubelet[2234]: E0129 11:12:44.310383 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:44.313197 kubelet[2234]: E0129 11:12:44.313170 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:44.948257 kubelet[2234]: I0129 11:12:44.948196 2234 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:12:44.948257 kubelet[2234]: E0129 11:12:44.948240 2234 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:12:44.962290 kubelet[2234]: E0129 11:12:44.962230 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" 
Jan 29 11:12:45.063218 kubelet[2234]: E0129 11:12:45.063148 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.164061 kubelet[2234]: E0129 11:12:45.164004 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.264519 kubelet[2234]: E0129 11:12:45.264317 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.279110 update_engine[1474]: I20250129 11:12:45.279036 1474 update_attempter.cc:509] Updating boot flags... Jan 29 11:12:45.315038 kubelet[2234]: E0129 11:12:45.315000 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:45.322545 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2524) Jan 29 11:12:45.358458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2523) Jan 29 11:12:45.367000 kubelet[2234]: E0129 11:12:45.366964 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.395453 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2523) Jan 29 11:12:45.467657 kubelet[2234]: E0129 11:12:45.467598 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.568046 kubelet[2234]: E0129 11:12:45.567893 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.668981 kubelet[2234]: E0129 11:12:45.668919 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.769053 kubelet[2234]: E0129 11:12:45.769000 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.869556 kubelet[2234]: E0129 11:12:45.869393 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:45.970100 kubelet[2234]: E0129 11:12:45.969956 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.070571 kubelet[2234]: E0129 11:12:46.070496 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.171624 kubelet[2234]: E0129 11:12:46.171396 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.271582 kubelet[2234]: E0129 11:12:46.271507 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.372271 kubelet[2234]: E0129 11:12:46.372214 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.472938 kubelet[2234]: E0129 11:12:46.472859 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.573472 kubelet[2234]: E0129 11:12:46.573402 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.674161 kubelet[2234]: E0129 
11:12:46.674116 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.768996 systemd[1]: Reloading requested from client PID 2533 ('systemctl') (unit session-7.scope)... Jan 29 11:12:46.769015 systemd[1]: Reloading... Jan 29 11:12:46.774461 kubelet[2234]: E0129 11:12:46.774390 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.859568 zram_generator::config[2578]: No configuration found. Jan 29 11:12:46.875113 kubelet[2234]: E0129 11:12:46.875048 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:46.971999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:12:46.975721 kubelet[2234]: E0129 11:12:46.975661 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:47.068069 systemd[1]: Reloading finished in 298 ms. Jan 29 11:12:47.076603 kubelet[2234]: E0129 11:12:47.076544 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:47.118671 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:47.145207 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:12:47.145579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:47.161798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:47.311657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:47.322873 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:12:47.368988 kubelet[2617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:12:47.368988 kubelet[2617]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:12:47.368988 kubelet[2617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:12:47.369479 kubelet[2617]: I0129 11:12:47.369044 2617 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:12:47.375109 kubelet[2617]: I0129 11:12:47.375080 2617 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:12:47.376426 kubelet[2617]: I0129 11:12:47.375202 2617 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:12:47.376426 kubelet[2617]: I0129 11:12:47.375437 2617 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:12:47.376702 kubelet[2617]: I0129 11:12:47.376675 2617 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 11:12:47.378440 kubelet[2617]: I0129 11:12:47.378404 2617 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:12:47.382495 kubelet[2617]: E0129 11:12:47.382447 2617 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:12:47.382495 kubelet[2617]: I0129 11:12:47.382492 2617 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:12:47.387025 kubelet[2617]: I0129 11:12:47.386991 2617 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:12:47.387133 kubelet[2617]: I0129 11:12:47.387120 2617 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:12:47.387306 kubelet[2617]: I0129 11:12:47.387263 2617 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:12:47.387457 kubelet[2617]: I0129 11:12:47.387291 2617 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:12:47.387560 kubelet[2617]: I0129 11:12:47.387464 2617 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:12:47.387560 kubelet[2617]: I0129 11:12:47.387473 2617 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:12:47.387560 kubelet[2617]: I0129 11:12:47.387507 2617 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:12:47.387642 kubelet[2617]: I0129 11:12:47.387620 2617 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:12:47.387642 kubelet[2617]: I0129 11:12:47.387637 2617 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:12:47.387691 kubelet[2617]: I0129 11:12:47.387669 
2617 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:12:47.387691 kubelet[2617]: I0129 11:12:47.387686 2617 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:12:47.388554 kubelet[2617]: I0129 11:12:47.388536 2617 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:12:47.391427 kubelet[2617]: I0129 11:12:47.389071 2617 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:12:47.391427 kubelet[2617]: I0129 11:12:47.389474 2617 server.go:1269] "Started kubelet" Jan 29 11:12:47.391427 kubelet[2617]: I0129 11:12:47.391394 2617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:12:47.391710 kubelet[2617]: I0129 11:12:47.391665 2617 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:12:47.392694 kubelet[2617]: I0129 11:12:47.392677 2617 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:12:47.393264 kubelet[2617]: I0129 11:12:47.393210 2617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:12:47.393535 kubelet[2617]: I0129 11:12:47.393512 2617 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:12:47.393597 kubelet[2617]: I0129 11:12:47.393589 2617 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:12:47.397451 kubelet[2617]: E0129 11:12:47.397055 2617 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:47.401263 kubelet[2617]: I0129 11:12:47.401201 2617 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:12:47.401393 kubelet[2617]: I0129 11:12:47.401350 2617 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:12:47.403026 kubelet[2617]: I0129 11:12:47.402994 2617 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:12:47.403625 kubelet[2617]: I0129 11:12:47.403602 2617 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:12:47.403742 kubelet[2617]: I0129 11:12:47.403722 2617 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:12:47.403971 kubelet[2617]: I0129 11:12:47.403950 2617 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:12:47.407137 kubelet[2617]: E0129 11:12:47.407111 2617 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:12:47.414268 kubelet[2617]: I0129 11:12:47.414220 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:12:47.415583 kubelet[2617]: I0129 11:12:47.415563 2617 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:12:47.415683 kubelet[2617]: I0129 11:12:47.415601 2617 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:12:47.415683 kubelet[2617]: I0129 11:12:47.415619 2617 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:12:47.415683 kubelet[2617]: E0129 11:12:47.415663 2617 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:12:47.440274 kubelet[2617]: I0129 11:12:47.440240 2617 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:12:47.440274 kubelet[2617]: I0129 11:12:47.440257 2617 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:12:47.440274 kubelet[2617]: I0129 11:12:47.440275 2617 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:12:47.443330 kubelet[2617]: I0129 11:12:47.442923 2617 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:12:47.443330 kubelet[2617]: I0129 11:12:47.442946 2617 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:12:47.443330 kubelet[2617]: I0129 11:12:47.442972 2617 policy_none.go:49] "None policy: Start" Jan 29 11:12:47.443931 kubelet[2617]: I0129 11:12:47.443908 2617 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:12:47.443977 kubelet[2617]: I0129 11:12:47.443934 2617 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:12:47.444132 kubelet[2617]: I0129 11:12:47.444106 2617 state_mem.go:75] "Updated machine memory state" Jan 29 11:12:47.448463 kubelet[2617]: I0129 11:12:47.448437 2617 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:12:47.448742 kubelet[2617]: I0129 11:12:47.448721 2617 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:12:47.448784 kubelet[2617]: I0129 11:12:47.448747 2617 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:12:47.448948 kubelet[2617]: I0129 11:12:47.448929 2617 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:12:47.554341 kubelet[2617]: I0129 11:12:47.554289 2617 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:12:47.561778 kubelet[2617]: I0129 11:12:47.561738 2617 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:12:47.561886 kubelet[2617]: I0129 11:12:47.561832 2617 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:12:47.605095 kubelet[2617]: I0129 11:12:47.604957 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d9e61e2f40dc43d4b8f6592fefdcb3f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d9e61e2f40dc43d4b8f6592fefdcb3f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:47.605095 kubelet[2617]: I0129 11:12:47.605000 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d9e61e2f40dc43d4b8f6592fefdcb3f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d9e61e2f40dc43d4b8f6592fefdcb3f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:47.605095 kubelet[2617]: I0129 11:12:47.605022 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d9e61e2f40dc43d4b8f6592fefdcb3f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6d9e61e2f40dc43d4b8f6592fefdcb3f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:47.605095 kubelet[2617]: I0129 11:12:47.605043 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:47.605095 kubelet[2617]: I0129 11:12:47.605063 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:47.605366 kubelet[2617]: I0129 11:12:47.605087 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:12:47.605366 kubelet[2617]: I0129 11:12:47.605114 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:47.605366 kubelet[2617]: I0129 11:12:47.605136 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:47.605366 kubelet[2617]: I0129 11:12:47.605153 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:47.767667 sudo[2653]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:12:47.768023 sudo[2653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:12:47.827679 kubelet[2617]: E0129 11:12:47.827641 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:47.829725 kubelet[2617]: E0129 11:12:47.829681 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:47.829725 kubelet[2617]: E0129 11:12:47.829702 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:48.237396 sudo[2653]: pam_unix(sudo:session): session closed for user root Jan 29 11:12:48.387964 kubelet[2617]: I0129 11:12:48.387922 2617 apiserver.go:52] "Watching apiserver" Jan 29 11:12:48.404545 kubelet[2617]: I0129 11:12:48.404188 2617 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:12:48.426684 kubelet[2617]: E0129 11:12:48.426531 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:48.426684 kubelet[2617]: E0129 11:12:48.426612 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:48.431712 kubelet[2617]: E0129 11:12:48.431353 2617 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:48.431712 kubelet[2617]: E0129 11:12:48.431529 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:48.456131 kubelet[2617]: I0129 11:12:48.456066 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.456043505 podStartE2EDuration="1.456043505s" podCreationTimestamp="2025-01-29 11:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:48.450707546 +0000 UTC m=+1.123428198" watchObservedRunningTime="2025-01-29 11:12:48.456043505 +0000 UTC m=+1.128764157" Jan 29 11:12:48.462950 kubelet[2617]: I0129 11:12:48.462876 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4628565359999999 podStartE2EDuration="1.462856536s" podCreationTimestamp="2025-01-29 11:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:48.456570221 +0000 UTC m=+1.129290873" watchObservedRunningTime="2025-01-29 11:12:48.462856536 +0000 UTC m=+1.135577178" Jan 29 11:12:48.469437 kubelet[2617]: I0129 11:12:48.469333 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4693219100000001 podStartE2EDuration="1.46932191s" podCreationTimestamp="2025-01-29 11:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:48.463012681 +0000 UTC m=+1.135733333" watchObservedRunningTime="2025-01-29 11:12:48.46932191 +0000 UTC m=+1.142042552" Jan 29 11:12:49.428124 kubelet[2617]: E0129 11:12:49.428082 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:49.941113 sudo[1676]: pam_unix(sudo:session): session closed for user root Jan 29 11:12:49.942466 sshd[1675]: Connection closed by 10.0.0.1 port 37414 Jan 29 11:12:49.942797 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Jan 29 
11:12:49.946948 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:37414.service: Deactivated successfully. Jan 29 11:12:49.949037 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:12:49.949222 systemd[1]: session-7.scope: Consumed 4.537s CPU time, 147.6M memory peak, 0B memory swap peak. Jan 29 11:12:49.949845 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:12:49.950862 systemd-logind[1472]: Removed session 7. Jan 29 11:12:50.825344 kubelet[2617]: E0129 11:12:50.825250 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:51.688964 kubelet[2617]: E0129 11:12:51.688915 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:53.111350 kubelet[2617]: I0129 11:12:53.111263 2617 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:12:53.112085 kubelet[2617]: I0129 11:12:53.112047 2617 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:12:53.112132 containerd[1490]: time="2025-01-29T11:12:53.111800129Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:12:53.525131 kubelet[2617]: E0129 11:12:53.525094 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:54.098220 systemd[1]: Created slice kubepods-besteffort-podc30ee7af_e066_4598_948f_65ad0b932440.slice - libcontainer container kubepods-besteffort-podc30ee7af_e066_4598_948f_65ad0b932440.slice. 
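The kubelet entries above show the node's pod CIDR being handed to the container runtime once it is assigned (CIDR="192.168.0.0/24"). As a rough illustration of what that range provides, the following stand-alone Go sketch (not kubelet code; the CIDR string is simply copied from the log) parses it with the standard library and reports how many pod addresses it covers:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Pod CIDR reported by the kubelet above; used here only as example input.
        _, podNet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := podNet.Mask.Size()
        // A /24 leaves 2^(32-24) = 256 addresses for this node's pods.
        fmt.Printf("pod network %s: /%d prefix, %d addresses\n", podNet, ones, 1<<(bits-ones))
    }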
Jan 29 11:12:54.147442 kubelet[2617]: I0129 11:12:54.147339 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30ee7af-e066-4598-948f-65ad0b932440-cilium-config-path\") pod \"cilium-operator-5d85765b45-fksqf\" (UID: \"c30ee7af-e066-4598-948f-65ad0b932440\") " pod="kube-system/cilium-operator-5d85765b45-fksqf" Jan 29 11:12:54.147442 kubelet[2617]: I0129 11:12:54.147430 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxvt\" (UniqueName: \"kubernetes.io/projected/c30ee7af-e066-4598-948f-65ad0b932440-kube-api-access-rxxvt\") pod \"cilium-operator-5d85765b45-fksqf\" (UID: \"c30ee7af-e066-4598-948f-65ad0b932440\") " pod="kube-system/cilium-operator-5d85765b45-fksqf" Jan 29 11:12:54.433881 kubelet[2617]: E0129 11:12:54.433853 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:54.708611 kubelet[2617]: E0129 11:12:54.708448 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:54.709087 containerd[1490]: time="2025-01-29T11:12:54.709026894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fksqf,Uid:c30ee7af-e066-4598-948f-65ad0b932440,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:54.892592 systemd[1]: Created slice kubepods-besteffort-podc08b5102_8c15_4606_b4ce_0ecc3827a9da.slice - libcontainer container kubepods-besteffort-podc08b5102_8c15_4606_b4ce_0ecc3827a9da.slice. Jan 29 11:12:54.936161 systemd[1]: Created slice kubepods-burstable-poddcbdd151_2455_4b84_934e_c7d50eedc46e.slice - libcontainer container kubepods-burstable-poddcbdd151_2455_4b84_934e_c7d50eedc46e.slice. 
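The recurring "Nameserver limits exceeded" errors mean the host's resolv.conf lists more nameservers than the kubelet will pass through to pods, so only the first few are applied (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). Below is a minimal Go sketch of that truncation behaviour, assuming a limit of three servers as suggested by the applied line in the log; it is an illustration, not the kubelet's actual implementation:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Assumed limit, matching the three-entry nameserver line applied in the log above.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("limit exceeded: applying %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
            return
        }
        fmt.Println("nameservers:", servers)
    }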
Jan 29 11:12:54.950167 kubelet[2617]: I0129 11:12:54.950134 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-cgroup\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950167 kubelet[2617]: I0129 11:12:54.950168 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5ds\" (UniqueName: \"kubernetes.io/projected/c08b5102-8c15-4606-b4ce-0ecc3827a9da-kube-api-access-4f5ds\") pod \"kube-proxy-vgc6r\" (UID: \"c08b5102-8c15-4606-b4ce-0ecc3827a9da\") " pod="kube-system/kube-proxy-vgc6r" Jan 29 11:12:54.950346 kubelet[2617]: I0129 11:12:54.950184 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-run\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950346 kubelet[2617]: I0129 11:12:54.950202 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-hostproc\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950346 kubelet[2617]: I0129 11:12:54.950223 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-lib-modules\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950346 kubelet[2617]: I0129 11:12:54.950243 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcbdd151-2455-4b84-934e-c7d50eedc46e-clustermesh-secrets\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950346 kubelet[2617]: I0129 11:12:54.950286 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c08b5102-8c15-4606-b4ce-0ecc3827a9da-xtables-lock\") pod \"kube-proxy-vgc6r\" (UID: \"c08b5102-8c15-4606-b4ce-0ecc3827a9da\") " pod="kube-system/kube-proxy-vgc6r" Jan 29 11:12:54.950346 kubelet[2617]: I0129 11:12:54.950303 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-xtables-lock\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950584 kubelet[2617]: I0129 11:12:54.950315 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-host-proc-sys-kernel\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950584 kubelet[2617]: I0129 11:12:54.950333 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcbdd151-2455-4b84-934e-c7d50eedc46e-hubble-tls\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950584 kubelet[2617]: I0129 11:12:54.950356 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c08b5102-8c15-4606-b4ce-0ecc3827a9da-kube-proxy\") pod \"kube-proxy-vgc6r\" (UID: \"c08b5102-8c15-4606-b4ce-0ecc3827a9da\") " pod="kube-system/kube-proxy-vgc6r" Jan 29 11:12:54.950584 kubelet[2617]: I0129 11:12:54.950375 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c08b5102-8c15-4606-b4ce-0ecc3827a9da-lib-modules\") pod \"kube-proxy-vgc6r\" (UID: \"c08b5102-8c15-4606-b4ce-0ecc3827a9da\") " pod="kube-system/kube-proxy-vgc6r" Jan 29 11:12:54.950584 kubelet[2617]: I0129 11:12:54.950397 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-bpf-maps\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950584 kubelet[2617]: I0129 11:12:54.950445 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxxt\" (UniqueName: \"kubernetes.io/projected/dcbdd151-2455-4b84-934e-c7d50eedc46e-kube-api-access-cdxxt\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950763 kubelet[2617]: I0129 11:12:54.950488 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cni-path\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950763 kubelet[2617]: I0129 11:12:54.950512 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-etc-cni-netd\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950763 kubelet[2617]: I0129 11:12:54.950533 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-config-path\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:54.950763 kubelet[2617]: I0129 11:12:54.950551 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-host-proc-sys-net\") pod \"cilium-kpfn4\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " pod="kube-system/cilium-kpfn4" Jan 29 11:12:55.195934 kubelet[2617]: E0129 11:12:55.195873 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:55.197104 containerd[1490]: time="2025-01-29T11:12:55.196546280Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-vgc6r,Uid:c08b5102-8c15-4606-b4ce-0ecc3827a9da,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:55.238759 kubelet[2617]: E0129 11:12:55.238707 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:55.239342 containerd[1490]: time="2025-01-29T11:12:55.239276436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kpfn4,Uid:dcbdd151-2455-4b84-934e-c7d50eedc46e,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:55.326302 containerd[1490]: time="2025-01-29T11:12:55.326136324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:55.326467 containerd[1490]: time="2025-01-29T11:12:55.326297198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:55.327633 containerd[1490]: time="2025-01-29T11:12:55.326743188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:55.327633 containerd[1490]: time="2025-01-29T11:12:55.326835903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:55.343469 containerd[1490]: time="2025-01-29T11:12:55.341649867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:55.343469 containerd[1490]: time="2025-01-29T11:12:55.341711754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:55.343469 containerd[1490]: time="2025-01-29T11:12:55.341726712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:55.343469 containerd[1490]: time="2025-01-29T11:12:55.341805471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:55.347201 containerd[1490]: time="2025-01-29T11:12:55.347074272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:55.347201 containerd[1490]: time="2025-01-29T11:12:55.347145436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:55.347201 containerd[1490]: time="2025-01-29T11:12:55.347159913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:55.347527 containerd[1490]: time="2025-01-29T11:12:55.347486799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:55.357572 systemd[1]: Started cri-containerd-473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830.scope - libcontainer container 473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830. Jan 29 11:12:55.363268 systemd[1]: Started cri-containerd-95d7b634093926e80a99783517652702ed4be0b6137e779e795cdc9e1f053d6c.scope - libcontainer container 95d7b634093926e80a99783517652702ed4be0b6137e779e795cdc9e1f053d6c. 
Jan 29 11:12:55.370051 systemd[1]: Started cri-containerd-ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1.scope - libcontainer container ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1. Jan 29 11:12:55.403039 containerd[1490]: time="2025-01-29T11:12:55.402970495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vgc6r,Uid:c08b5102-8c15-4606-b4ce-0ecc3827a9da,Namespace:kube-system,Attempt:0,} returns sandbox id \"95d7b634093926e80a99783517652702ed4be0b6137e779e795cdc9e1f053d6c\"" Jan 29 11:12:55.403906 kubelet[2617]: E0129 11:12:55.403879 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:55.404189 containerd[1490]: time="2025-01-29T11:12:55.404165157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kpfn4,Uid:dcbdd151-2455-4b84-934e-c7d50eedc46e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\"" Jan 29 11:12:55.406106 kubelet[2617]: E0129 11:12:55.406082 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:55.407542 containerd[1490]: time="2025-01-29T11:12:55.407509400Z" level=info msg="CreateContainer within sandbox \"95d7b634093926e80a99783517652702ed4be0b6137e779e795cdc9e1f053d6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:12:55.408118 containerd[1490]: time="2025-01-29T11:12:55.408091317Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:12:55.424450 containerd[1490]: time="2025-01-29T11:12:55.424098010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fksqf,Uid:c30ee7af-e066-4598-948f-65ad0b932440,Namespace:kube-system,Attempt:0,} returns sandbox id \"473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830\"" Jan 29 11:12:55.424821 kubelet[2617]: E0129 11:12:55.424742 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:55.814333 containerd[1490]: time="2025-01-29T11:12:55.814262349Z" level=info msg="CreateContainer within sandbox \"95d7b634093926e80a99783517652702ed4be0b6137e779e795cdc9e1f053d6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fff3735a1a71cc444d3f52ab0770e9d745de7b622b45b9619c56e2c581fd1207\"" Jan 29 11:12:55.814903 containerd[1490]: time="2025-01-29T11:12:55.814765898Z" level=info msg="StartContainer for \"fff3735a1a71cc444d3f52ab0770e9d745de7b622b45b9619c56e2c581fd1207\"" Jan 29 11:12:55.851558 systemd[1]: Started cri-containerd-fff3735a1a71cc444d3f52ab0770e9d745de7b622b45b9619c56e2c581fd1207.scope - libcontainer container fff3735a1a71cc444d3f52ab0770e9d745de7b622b45b9619c56e2c581fd1207. 
Jan 29 11:12:55.970475 containerd[1490]: time="2025-01-29T11:12:55.970402579Z" level=info msg="StartContainer for \"fff3735a1a71cc444d3f52ab0770e9d745de7b622b45b9619c56e2c581fd1207\" returns successfully" Jan 29 11:12:56.440873 kubelet[2617]: E0129 11:12:56.440838 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:57.442505 kubelet[2617]: E0129 11:12:57.442465 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:00.830627 kubelet[2617]: E0129 11:13:00.830571 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:01.034698 kubelet[2617]: I0129 11:13:01.034614 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vgc6r" podStartSLOduration=7.034590618 podStartE2EDuration="7.034590618s" podCreationTimestamp="2025-01-29 11:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:56.504671128 +0000 UTC m=+9.177391770" watchObservedRunningTime="2025-01-29 11:13:01.034590618 +0000 UTC m=+13.707311290" Jan 29 11:13:01.693115 kubelet[2617]: E0129 11:13:01.693077 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:06.812966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132809973.mount: Deactivated successfully. 
Jan 29 11:13:09.657743 containerd[1490]: time="2025-01-29T11:13:09.657674925Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:09.658484 containerd[1490]: time="2025-01-29T11:13:09.658431306Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:13:09.659961 containerd[1490]: time="2025-01-29T11:13:09.659931717Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:09.661619 containerd[1490]: time="2025-01-29T11:13:09.661584151Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.2534638s" Jan 29 11:13:09.661666 containerd[1490]: time="2025-01-29T11:13:09.661622162Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:13:09.662924 containerd[1490]: time="2025-01-29T11:13:09.662896538Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:13:09.666068 containerd[1490]: time="2025-01-29T11:13:09.665933043Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:13:09.683430 containerd[1490]: time="2025-01-29T11:13:09.683338568Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\"" Jan 29 11:13:09.684094 containerd[1490]: time="2025-01-29T11:13:09.684054033Z" level=info msg="StartContainer for \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\"" Jan 29 11:13:09.721579 systemd[1]: Started cri-containerd-02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c.scope - libcontainer container 02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c. Jan 29 11:13:09.754033 containerd[1490]: time="2025-01-29T11:13:09.753976050Z" level=info msg="StartContainer for \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\" returns successfully" Jan 29 11:13:09.763508 systemd[1]: cri-containerd-02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c.scope: Deactivated successfully. 
Jan 29 11:13:10.301727 containerd[1490]: time="2025-01-29T11:13:10.301637592Z" level=info msg="shim disconnected" id=02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c namespace=k8s.io Jan 29 11:13:10.301727 containerd[1490]: time="2025-01-29T11:13:10.301724044Z" level=warning msg="cleaning up after shim disconnected" id=02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c namespace=k8s.io Jan 29 11:13:10.301727 containerd[1490]: time="2025-01-29T11:13:10.301736909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:13:10.471971 kubelet[2617]: E0129 11:13:10.471931 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:10.475273 containerd[1490]: time="2025-01-29T11:13:10.475229114Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:13:10.494113 containerd[1490]: time="2025-01-29T11:13:10.493919289Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\"" Jan 29 11:13:10.496867 containerd[1490]: time="2025-01-29T11:13:10.496801905Z" level=info msg="StartContainer for \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\"" Jan 29 11:13:10.527627 systemd[1]: Started cri-containerd-83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333.scope - libcontainer container 83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333. Jan 29 11:13:10.559216 containerd[1490]: time="2025-01-29T11:13:10.559085983Z" level=info msg="StartContainer for \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\" returns successfully" Jan 29 11:13:10.573363 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:13:10.573707 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:13:10.573780 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:13:10.578893 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:13:10.579191 systemd[1]: cri-containerd-83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333.scope: Deactivated successfully. Jan 29 11:13:10.604526 containerd[1490]: time="2025-01-29T11:13:10.604447227Z" level=info msg="shim disconnected" id=83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333 namespace=k8s.io Jan 29 11:13:10.604526 containerd[1490]: time="2025-01-29T11:13:10.604514795Z" level=warning msg="cleaning up after shim disconnected" id=83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333 namespace=k8s.io Jan 29 11:13:10.604526 containerd[1490]: time="2025-01-29T11:13:10.604526346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:13:10.620049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:13:10.677897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c-rootfs.mount: Deactivated successfully. Jan 29 11:13:11.299605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400483610.mount: Deactivated successfully. 
Jan 29 11:13:11.474899 kubelet[2617]: E0129 11:13:11.474861 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:11.478863 containerd[1490]: time="2025-01-29T11:13:11.478726430Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:13:11.508437 containerd[1490]: time="2025-01-29T11:13:11.508357453Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\"" Jan 29 11:13:11.509210 containerd[1490]: time="2025-01-29T11:13:11.509174949Z" level=info msg="StartContainer for \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\"" Jan 29 11:13:11.551672 systemd[1]: Started cri-containerd-36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13.scope - libcontainer container 36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13. Jan 29 11:13:11.592868 systemd[1]: cri-containerd-36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13.scope: Deactivated successfully. Jan 29 11:13:11.650649 containerd[1490]: time="2025-01-29T11:13:11.650598831Z" level=info msg="StartContainer for \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\" returns successfully" Jan 29 11:13:11.775920 containerd[1490]: time="2025-01-29T11:13:11.775846234Z" level=info msg="shim disconnected" id=36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13 namespace=k8s.io Jan 29 11:13:11.775920 containerd[1490]: time="2025-01-29T11:13:11.775906979Z" level=warning msg="cleaning up after shim disconnected" id=36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13 namespace=k8s.io Jan 29 11:13:11.775920 containerd[1490]: time="2025-01-29T11:13:11.775916026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:13:11.796967 containerd[1490]: time="2025-01-29T11:13:11.796910375Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:11.797711 containerd[1490]: time="2025-01-29T11:13:11.797652609Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:13:11.798812 containerd[1490]: time="2025-01-29T11:13:11.798778754Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:11.800141 containerd[1490]: time="2025-01-29T11:13:11.800112821Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.137186387s" Jan 29 11:13:11.800188 containerd[1490]: time="2025-01-29T11:13:11.800141595Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:13:11.807015 containerd[1490]: time="2025-01-29T11:13:11.806925559Z" level=info msg="CreateContainer within sandbox \"473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:13:11.818768 containerd[1490]: time="2025-01-29T11:13:11.818727490Z" level=info msg="CreateContainer within sandbox \"473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\"" Jan 29 11:13:11.819207 containerd[1490]: time="2025-01-29T11:13:11.819178857Z" level=info msg="StartContainer for \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\"" Jan 29 11:13:11.853601 systemd[1]: Started cri-containerd-714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51.scope - libcontainer container 714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51. Jan 29 11:13:11.881254 containerd[1490]: time="2025-01-29T11:13:11.881203364Z" level=info msg="StartContainer for \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\" returns successfully" Jan 29 11:13:12.488853 kubelet[2617]: E0129 11:13:12.488813 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:12.488853 kubelet[2617]: E0129 11:13:12.488819 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:12.490474 containerd[1490]: time="2025-01-29T11:13:12.490426242Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:13:12.710807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799876705.mount: Deactivated successfully. Jan 29 11:13:12.713571 containerd[1490]: time="2025-01-29T11:13:12.713430798Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\"" Jan 29 11:13:12.715787 containerd[1490]: time="2025-01-29T11:13:12.715740366Z" level=info msg="StartContainer for \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\"" Jan 29 11:13:12.778475 systemd[1]: Started cri-containerd-f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da.scope - libcontainer container f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da. Jan 29 11:13:12.836106 systemd[1]: cri-containerd-f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da.scope: Deactivated successfully. 
Jan 29 11:13:12.840729 containerd[1490]: time="2025-01-29T11:13:12.840696785Z" level=info msg="StartContainer for \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\" returns successfully" Jan 29 11:13:12.866786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da-rootfs.mount: Deactivated successfully. Jan 29 11:13:12.871884 containerd[1490]: time="2025-01-29T11:13:12.871801089Z" level=info msg="shim disconnected" id=f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da namespace=k8s.io Jan 29 11:13:12.871884 containerd[1490]: time="2025-01-29T11:13:12.871866582Z" level=warning msg="cleaning up after shim disconnected" id=f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da namespace=k8s.io Jan 29 11:13:12.871884 containerd[1490]: time="2025-01-29T11:13:12.871879176Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:13:13.491886 kubelet[2617]: E0129 11:13:13.491660 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:13.491886 kubelet[2617]: E0129 11:13:13.491838 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:13.493904 containerd[1490]: time="2025-01-29T11:13:13.493857104Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:13:13.507737 kubelet[2617]: I0129 11:13:13.507673 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fksqf" podStartSLOduration=4.132241593 podStartE2EDuration="20.50765618s" podCreationTimestamp="2025-01-29 11:12:53 +0000 UTC" firstStartedPulling="2025-01-29 11:12:55.425349889 +0000 UTC m=+8.098070531" lastFinishedPulling="2025-01-29 11:13:11.800764475 +0000 UTC m=+24.473485118" observedRunningTime="2025-01-29 11:13:12.817253741 +0000 UTC m=+25.489974383" watchObservedRunningTime="2025-01-29 11:13:13.50765618 +0000 UTC m=+26.180376822" Jan 29 11:13:13.513208 containerd[1490]: time="2025-01-29T11:13:13.513152062Z" level=info msg="CreateContainer within sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\"" Jan 29 11:13:13.513808 containerd[1490]: time="2025-01-29T11:13:13.513774952Z" level=info msg="StartContainer for \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\"" Jan 29 11:13:13.540588 systemd[1]: Started cri-containerd-bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd.scope - libcontainer container bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd. 
Jan 29 11:13:13.582215 containerd[1490]: time="2025-01-29T11:13:13.582161330Z" level=info msg="StartContainer for \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\" returns successfully" Jan 29 11:13:13.707986 kubelet[2617]: I0129 11:13:13.707946 2617 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:13:13.734050 systemd[1]: Created slice kubepods-burstable-pod5f3d0205_33ff_4fa1_a661_d1a1b4ecbe35.slice - libcontainer container kubepods-burstable-pod5f3d0205_33ff_4fa1_a661_d1a1b4ecbe35.slice. Jan 29 11:13:13.742234 systemd[1]: Created slice kubepods-burstable-pod09e7bd49_3b5c_44c0_8fa7_1cab6d583995.slice - libcontainer container kubepods-burstable-pod09e7bd49_3b5c_44c0_8fa7_1cab6d583995.slice. Jan 29 11:13:13.865281 kubelet[2617]: I0129 11:13:13.865062 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09e7bd49-3b5c-44c0-8fa7-1cab6d583995-config-volume\") pod \"coredns-6f6b679f8f-2vsj7\" (UID: \"09e7bd49-3b5c-44c0-8fa7-1cab6d583995\") " pod="kube-system/coredns-6f6b679f8f-2vsj7" Jan 29 11:13:13.865281 kubelet[2617]: I0129 11:13:13.865129 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dmwh\" (UniqueName: \"kubernetes.io/projected/09e7bd49-3b5c-44c0-8fa7-1cab6d583995-kube-api-access-5dmwh\") pod \"coredns-6f6b679f8f-2vsj7\" (UID: \"09e7bd49-3b5c-44c0-8fa7-1cab6d583995\") " pod="kube-system/coredns-6f6b679f8f-2vsj7" Jan 29 11:13:13.865281 kubelet[2617]: I0129 11:13:13.865162 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f3d0205-33ff-4fa1-a661-d1a1b4ecbe35-config-volume\") pod \"coredns-6f6b679f8f-m9rvt\" (UID: \"5f3d0205-33ff-4fa1-a661-d1a1b4ecbe35\") " pod="kube-system/coredns-6f6b679f8f-m9rvt" Jan 29 11:13:13.865281 kubelet[2617]: I0129 11:13:13.865204 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7v7d\" (UniqueName: \"kubernetes.io/projected/5f3d0205-33ff-4fa1-a661-d1a1b4ecbe35-kube-api-access-j7v7d\") pod \"coredns-6f6b679f8f-m9rvt\" (UID: \"5f3d0205-33ff-4fa1-a661-d1a1b4ecbe35\") " pod="kube-system/coredns-6f6b679f8f-m9rvt" Jan 29 11:13:14.040695 kubelet[2617]: E0129 11:13:14.040573 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:14.041510 containerd[1490]: time="2025-01-29T11:13:14.041449892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m9rvt,Uid:5f3d0205-33ff-4fa1-a661-d1a1b4ecbe35,Namespace:kube-system,Attempt:0,}" Jan 29 11:13:14.047796 kubelet[2617]: E0129 11:13:14.047724 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:14.048295 containerd[1490]: time="2025-01-29T11:13:14.048240023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2vsj7,Uid:09e7bd49-3b5c-44c0-8fa7-1cab6d583995,Namespace:kube-system,Attempt:0,}" Jan 29 11:13:14.499871 kubelet[2617]: E0129 11:13:14.499815 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 29 11:13:15.501517 kubelet[2617]: E0129 11:13:15.501473 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:16.040456 systemd-networkd[1416]: cilium_host: Link UP Jan 29 11:13:16.040721 systemd-networkd[1416]: cilium_net: Link UP Jan 29 11:13:16.040997 systemd-networkd[1416]: cilium_net: Gained carrier Jan 29 11:13:16.041258 systemd-networkd[1416]: cilium_host: Gained carrier Jan 29 11:13:16.135592 systemd-networkd[1416]: cilium_host: Gained IPv6LL Jan 29 11:13:16.159987 systemd-networkd[1416]: cilium_vxlan: Link UP Jan 29 11:13:16.160003 systemd-networkd[1416]: cilium_vxlan: Gained carrier Jan 29 11:13:16.390557 systemd-networkd[1416]: cilium_net: Gained IPv6LL Jan 29 11:13:16.395450 kernel: NET: Registered PF_ALG protocol family Jan 29 11:13:16.502868 kubelet[2617]: E0129 11:13:16.502830 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:16.674669 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:52192.service - OpenSSH per-connection server daemon (10.0.0.1:52192). Jan 29 11:13:16.753262 sshd[3581]: Accepted publickey for core from 10.0.0.1 port 52192 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:16.755711 sshd-session[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:16.763548 systemd-logind[1472]: New session 8 of user core. Jan 29 11:13:16.771599 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:13:16.913767 sshd[3661]: Connection closed by 10.0.0.1 port 52192 Jan 29 11:13:16.914209 sshd-session[3581]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:16.919032 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:52192.service: Deactivated successfully. Jan 29 11:13:16.921999 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:13:16.925239 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:13:16.926388 systemd-logind[1472]: Removed session 8. 
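The sshd entries above identify the accepted key only by its SHA256 fingerprint. An OpenSSH SHA256 fingerprint is the unpadded base64 encoding of a SHA-256 digest over the binary public-key blob (the base64 field of an authorized_keys entry). The following stand-alone Go sketch recomputes such a fingerprint; the file path is a placeholder, and the output only matches the log when run against the same key:

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Placeholder path to an authorized_keys-style line: "ssh-rsa AAAA... comment".
        data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
        if err != nil {
            panic(err)
        }
        fields := strings.Fields(string(data))
        if len(fields) < 2 {
            panic("unexpected key format")
        }
        // The fingerprint is taken over the decoded key blob, not the text line.
        blob, err := base64.StdEncoding.DecodeString(fields[1])
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(blob)
        // OpenSSH prints this as "SHA256:" followed by unpadded base64.
        fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))
    }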
Jan 29 11:13:17.189798 systemd-networkd[1416]: lxc_health: Link UP Jan 29 11:13:17.203533 systemd-networkd[1416]: lxc_health: Gained carrier Jan 29 11:13:17.260441 kubelet[2617]: I0129 11:13:17.260129 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kpfn4" podStartSLOduration=9.00503605 podStartE2EDuration="23.260111941s" podCreationTimestamp="2025-01-29 11:12:54 +0000 UTC" firstStartedPulling="2025-01-29 11:12:55.407658662 +0000 UTC m=+8.080379304" lastFinishedPulling="2025-01-29 11:13:09.662734553 +0000 UTC m=+22.335455195" observedRunningTime="2025-01-29 11:13:14.600619441 +0000 UTC m=+27.273340103" watchObservedRunningTime="2025-01-29 11:13:17.260111941 +0000 UTC m=+29.932832583" Jan 29 11:13:17.301294 systemd-networkd[1416]: lxce000109a14d1: Link UP Jan 29 11:13:17.312459 kernel: eth0: renamed from tmpd0175 Jan 29 11:13:17.317033 systemd-networkd[1416]: lxce000109a14d1: Gained carrier Jan 29 11:13:17.336661 systemd-networkd[1416]: lxcdd353a916f4a: Link UP Jan 29 11:13:17.345447 kernel: eth0: renamed from tmp6af53 Jan 29 11:13:17.349832 systemd-networkd[1416]: lxcdd353a916f4a: Gained carrier Jan 29 11:13:17.463653 systemd-networkd[1416]: cilium_vxlan: Gained IPv6LL Jan 29 11:13:17.505176 kubelet[2617]: E0129 11:13:17.505133 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:18.232568 systemd-networkd[1416]: lxc_health: Gained IPv6LL Jan 29 11:13:18.422615 systemd-networkd[1416]: lxcdd353a916f4a: Gained IPv6LL Jan 29 11:13:18.704566 kubelet[2617]: I0129 11:13:18.704502 2617 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:13:18.705626 kubelet[2617]: E0129 11:13:18.705101 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:18.934863 systemd-networkd[1416]: lxce000109a14d1: Gained IPv6LL Jan 29 11:13:19.508185 kubelet[2617]: E0129 11:13:19.508140 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:21.123280 containerd[1490]: time="2025-01-29T11:13:21.123068956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:21.123280 containerd[1490]: time="2025-01-29T11:13:21.123134429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:21.123280 containerd[1490]: time="2025-01-29T11:13:21.123149307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:21.123280 containerd[1490]: time="2025-01-29T11:13:21.123229607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:21.123903 containerd[1490]: time="2025-01-29T11:13:21.123382895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:21.123903 containerd[1490]: time="2025-01-29T11:13:21.123459188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:21.123903 containerd[1490]: time="2025-01-29T11:13:21.123473705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:21.123903 containerd[1490]: time="2025-01-29T11:13:21.123551271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:21.143462 systemd[1]: run-containerd-runc-k8s.io-d0175b45485d5212e411a01f1ddd669920b0e83d69b60ce782cb6223af77a9f2-runc.zIgEwl.mount: Deactivated successfully. Jan 29 11:13:21.163557 systemd[1]: Started cri-containerd-6af53b78ed670ddc09e79470656ee0265f80c9ff5798746b9d27d95a7513a827.scope - libcontainer container 6af53b78ed670ddc09e79470656ee0265f80c9ff5798746b9d27d95a7513a827. Jan 29 11:13:21.165053 systemd[1]: Started cri-containerd-d0175b45485d5212e411a01f1ddd669920b0e83d69b60ce782cb6223af77a9f2.scope - libcontainer container d0175b45485d5212e411a01f1ddd669920b0e83d69b60ce782cb6223af77a9f2. Jan 29 11:13:21.180820 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:13:21.182108 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:13:21.208055 containerd[1490]: time="2025-01-29T11:13:21.208017372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2vsj7,Uid:09e7bd49-3b5c-44c0-8fa7-1cab6d583995,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0175b45485d5212e411a01f1ddd669920b0e83d69b60ce782cb6223af77a9f2\"" Jan 29 11:13:21.208975 kubelet[2617]: E0129 11:13:21.208784 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:21.211220 containerd[1490]: time="2025-01-29T11:13:21.211108694Z" level=info msg="CreateContainer within sandbox \"d0175b45485d5212e411a01f1ddd669920b0e83d69b60ce782cb6223af77a9f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:13:21.214837 containerd[1490]: time="2025-01-29T11:13:21.214785746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-m9rvt,Uid:5f3d0205-33ff-4fa1-a661-d1a1b4ecbe35,Namespace:kube-system,Attempt:0,} returns sandbox id \"6af53b78ed670ddc09e79470656ee0265f80c9ff5798746b9d27d95a7513a827\"" Jan 29 11:13:21.216238 kubelet[2617]: E0129 11:13:21.216205 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:21.217768 containerd[1490]: time="2025-01-29T11:13:21.217719732Z" level=info msg="CreateContainer within sandbox \"6af53b78ed670ddc09e79470656ee0265f80c9ff5798746b9d27d95a7513a827\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:13:21.231213 containerd[1490]: time="2025-01-29T11:13:21.231184874Z" level=info msg="CreateContainer within sandbox \"d0175b45485d5212e411a01f1ddd669920b0e83d69b60ce782cb6223af77a9f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"791e20cc89487ebe0251bb45e2f87e77061d1de9af918c028a6df017ff43e6b3\"" Jan 29 11:13:21.232000 containerd[1490]: time="2025-01-29T11:13:21.231549639Z" level=info msg="StartContainer for \"791e20cc89487ebe0251bb45e2f87e77061d1de9af918c028a6df017ff43e6b3\"" Jan 29 11:13:21.238643 
containerd[1490]: time="2025-01-29T11:13:21.238602486Z" level=info msg="CreateContainer within sandbox \"6af53b78ed670ddc09e79470656ee0265f80c9ff5798746b9d27d95a7513a827\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea5efbc64a70b479649b112de121bfc35e66701cd55d50e9deca9da2b0e130d3\"" Jan 29 11:13:21.240444 containerd[1490]: time="2025-01-29T11:13:21.239330102Z" level=info msg="StartContainer for \"ea5efbc64a70b479649b112de121bfc35e66701cd55d50e9deca9da2b0e130d3\"" Jan 29 11:13:21.260639 systemd[1]: Started cri-containerd-791e20cc89487ebe0251bb45e2f87e77061d1de9af918c028a6df017ff43e6b3.scope - libcontainer container 791e20cc89487ebe0251bb45e2f87e77061d1de9af918c028a6df017ff43e6b3. Jan 29 11:13:21.277586 systemd[1]: Started cri-containerd-ea5efbc64a70b479649b112de121bfc35e66701cd55d50e9deca9da2b0e130d3.scope - libcontainer container ea5efbc64a70b479649b112de121bfc35e66701cd55d50e9deca9da2b0e130d3. Jan 29 11:13:21.299644 containerd[1490]: time="2025-01-29T11:13:21.299597535Z" level=info msg="StartContainer for \"791e20cc89487ebe0251bb45e2f87e77061d1de9af918c028a6df017ff43e6b3\" returns successfully" Jan 29 11:13:21.307697 containerd[1490]: time="2025-01-29T11:13:21.307647855Z" level=info msg="StartContainer for \"ea5efbc64a70b479649b112de121bfc35e66701cd55d50e9deca9da2b0e130d3\" returns successfully" Jan 29 11:13:21.512359 kubelet[2617]: E0129 11:13:21.512331 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:21.514988 kubelet[2617]: E0129 11:13:21.514308 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:21.521332 kubelet[2617]: I0129 11:13:21.521158 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-m9rvt" podStartSLOduration=28.521141442 podStartE2EDuration="28.521141442s" podCreationTimestamp="2025-01-29 11:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:13:21.520694054 +0000 UTC m=+34.193414696" watchObservedRunningTime="2025-01-29 11:13:21.521141442 +0000 UTC m=+34.193862085" Jan 29 11:13:21.545552 kubelet[2617]: I0129 11:13:21.544709 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2vsj7" podStartSLOduration=28.54468496 podStartE2EDuration="28.54468496s" podCreationTimestamp="2025-01-29 11:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:13:21.544554586 +0000 UTC m=+34.217275238" watchObservedRunningTime="2025-01-29 11:13:21.54468496 +0000 UTC m=+34.217405603" Jan 29 11:13:21.929534 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:42800.service - OpenSSH per-connection server daemon (10.0.0.1:42800). Jan 29 11:13:21.989075 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 42800 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:21.991178 sshd-session[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:21.997146 systemd-logind[1472]: New session 9 of user core. Jan 29 11:13:22.006621 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 29 11:13:22.127228 sshd[4015]: Connection closed by 10.0.0.1 port 42800 Jan 29 11:13:22.128091 sshd-session[4013]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:22.132282 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:42800.service: Deactivated successfully. Jan 29 11:13:22.134662 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:13:22.136952 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:13:22.138110 systemd-logind[1472]: Removed session 9. Jan 29 11:13:22.515830 kubelet[2617]: E0129 11:13:22.515781 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:22.517023 kubelet[2617]: E0129 11:13:22.515861 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:23.518325 kubelet[2617]: E0129 11:13:23.518267 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:23.518325 kubelet[2617]: E0129 11:13:23.518265 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:27.143104 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:42804.service - OpenSSH per-connection server daemon (10.0.0.1:42804). Jan 29 11:13:27.188163 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 42804 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:27.190340 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:27.195313 systemd-logind[1472]: New session 10 of user core. Jan 29 11:13:27.208594 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:13:27.345635 sshd[4032]: Connection closed by 10.0.0.1 port 42804 Jan 29 11:13:27.346070 sshd-session[4030]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:27.350300 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:42804.service: Deactivated successfully. Jan 29 11:13:27.352650 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:13:27.353275 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:13:27.354217 systemd-logind[1472]: Removed session 10. Jan 29 11:13:32.359485 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:55856.service - OpenSSH per-connection server daemon (10.0.0.1:55856). Jan 29 11:13:32.411033 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 55856 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:32.412939 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:32.417199 systemd-logind[1472]: New session 11 of user core. Jan 29 11:13:32.424533 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:13:32.553857 sshd[4048]: Connection closed by 10.0.0.1 port 55856 Jan 29 11:13:32.554437 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:32.563098 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:55856.service: Deactivated successfully. Jan 29 11:13:32.565975 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 29 11:13:32.567919 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:13:32.577807 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:55864.service - OpenSSH per-connection server daemon (10.0.0.1:55864). Jan 29 11:13:32.578967 systemd-logind[1472]: Removed session 11. Jan 29 11:13:32.617186 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 55864 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:32.618874 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:32.622979 systemd-logind[1472]: New session 12 of user core. Jan 29 11:13:32.632541 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:13:32.900233 sshd[4064]: Connection closed by 10.0.0.1 port 55864 Jan 29 11:13:32.901891 sshd-session[4062]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:32.908961 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:55864.service: Deactivated successfully. Jan 29 11:13:32.911481 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:13:32.913310 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:13:32.925213 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:55876.service - OpenSSH per-connection server daemon (10.0.0.1:55876). Jan 29 11:13:32.926340 systemd-logind[1472]: Removed session 12. Jan 29 11:13:32.982845 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 55876 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:32.985037 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:33.004210 systemd-logind[1472]: New session 13 of user core. Jan 29 11:13:33.019927 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:13:33.298874 sshd[4078]: Connection closed by 10.0.0.1 port 55876 Jan 29 11:13:33.299309 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:33.304164 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:55876.service: Deactivated successfully. Jan 29 11:13:33.306721 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:13:33.307457 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:13:33.308426 systemd-logind[1472]: Removed session 13. Jan 29 11:13:38.310862 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:55886.service - OpenSSH per-connection server daemon (10.0.0.1:55886). Jan 29 11:13:38.353051 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 55886 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:38.354710 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:38.358947 systemd-logind[1472]: New session 14 of user core. Jan 29 11:13:38.368571 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:13:38.486782 sshd[4093]: Connection closed by 10.0.0.1 port 55886 Jan 29 11:13:38.487221 sshd-session[4091]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:38.492271 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:55886.service: Deactivated successfully. Jan 29 11:13:38.494664 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:13:38.495369 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:13:38.496347 systemd-logind[1472]: Removed session 14. 
Jan 29 11:13:43.506034 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:39082.service - OpenSSH per-connection server daemon (10.0.0.1:39082). Jan 29 11:13:43.552375 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 39082 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:43.554428 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:43.559982 systemd-logind[1472]: New session 15 of user core. Jan 29 11:13:43.574743 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:13:43.693831 sshd[4107]: Connection closed by 10.0.0.1 port 39082 Jan 29 11:13:43.694247 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:43.705374 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:39082.service: Deactivated successfully. Jan 29 11:13:43.707736 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:13:43.709600 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:13:43.720820 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:39096.service - OpenSSH per-connection server daemon (10.0.0.1:39096). Jan 29 11:13:43.722000 systemd-logind[1472]: Removed session 15. Jan 29 11:13:43.757049 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 39096 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:43.758428 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:43.762604 systemd-logind[1472]: New session 16 of user core. Jan 29 11:13:43.772553 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:13:44.005978 sshd[4122]: Connection closed by 10.0.0.1 port 39096 Jan 29 11:13:44.007362 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:44.015600 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:39096.service: Deactivated successfully. Jan 29 11:13:44.017886 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:13:44.019749 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:13:44.028850 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:39112.service - OpenSSH per-connection server daemon (10.0.0.1:39112). Jan 29 11:13:44.029911 systemd-logind[1472]: Removed session 16. Jan 29 11:13:44.071464 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 39112 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:44.073253 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:44.077487 systemd-logind[1472]: New session 17 of user core. Jan 29 11:13:44.086562 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:13:45.710756 sshd[4134]: Connection closed by 10.0.0.1 port 39112 Jan 29 11:13:45.712216 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:45.721178 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:39112.service: Deactivated successfully. Jan 29 11:13:45.723232 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:13:45.725244 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:13:45.734882 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:39128.service - OpenSSH per-connection server daemon (10.0.0.1:39128). Jan 29 11:13:45.738760 systemd-logind[1472]: Removed session 17. 
Jan 29 11:13:45.778941 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 39128 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:45.780599 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:45.784809 systemd-logind[1472]: New session 18 of user core. Jan 29 11:13:45.796565 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:13:46.037840 sshd[4157]: Connection closed by 10.0.0.1 port 39128 Jan 29 11:13:46.038242 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:46.047603 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:39128.service: Deactivated successfully. Jan 29 11:13:46.053292 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:13:46.055903 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:13:46.064785 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:39132.service - OpenSSH per-connection server daemon (10.0.0.1:39132). Jan 29 11:13:46.065618 systemd-logind[1472]: Removed session 18. Jan 29 11:13:46.107900 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 39132 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:46.109525 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:46.116598 systemd-logind[1472]: New session 19 of user core. Jan 29 11:13:46.122551 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:13:46.230590 sshd[4170]: Connection closed by 10.0.0.1 port 39132 Jan 29 11:13:46.230982 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:46.235265 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:39132.service: Deactivated successfully. Jan 29 11:13:46.237787 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:13:46.238692 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:13:46.239557 systemd-logind[1472]: Removed session 19. Jan 29 11:13:51.244342 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:46442.service - OpenSSH per-connection server daemon (10.0.0.1:46442). Jan 29 11:13:51.287730 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 46442 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:51.289609 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:51.294528 systemd-logind[1472]: New session 20 of user core. Jan 29 11:13:51.304696 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:13:51.434654 sshd[4187]: Connection closed by 10.0.0.1 port 46442 Jan 29 11:13:51.435163 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:51.440597 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:46442.service: Deactivated successfully. Jan 29 11:13:51.444034 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:13:51.445257 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:13:51.446554 systemd-logind[1472]: Removed session 20. Jan 29 11:13:56.448301 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:46456.service - OpenSSH per-connection server daemon (10.0.0.1:46456). 
Jan 29 11:13:56.489210 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 46456 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:13:56.490685 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:56.494808 systemd-logind[1472]: New session 21 of user core. Jan 29 11:13:56.504581 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:13:56.612764 sshd[4207]: Connection closed by 10.0.0.1 port 46456 Jan 29 11:13:56.613134 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:56.616840 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:46456.service: Deactivated successfully. Jan 29 11:13:56.619188 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:13:56.619909 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:13:56.620809 systemd-logind[1472]: Removed session 21. Jan 29 11:14:01.625580 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:57416.service - OpenSSH per-connection server daemon (10.0.0.1:57416). Jan 29 11:14:01.667803 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 57416 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:14:01.669460 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:01.673791 systemd-logind[1472]: New session 22 of user core. Jan 29 11:14:01.687581 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:14:01.795987 sshd[4221]: Connection closed by 10.0.0.1 port 57416 Jan 29 11:14:01.796362 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:01.800758 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:57416.service: Deactivated successfully. Jan 29 11:14:01.803244 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:14:01.803953 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:14:01.804894 systemd-logind[1472]: Removed session 22. Jan 29 11:14:06.417188 kubelet[2617]: E0129 11:14:06.417132 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:06.818105 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:57420.service - OpenSSH per-connection server daemon (10.0.0.1:57420). Jan 29 11:14:06.862799 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 57420 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:14:06.865075 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:06.870288 systemd-logind[1472]: New session 23 of user core. Jan 29 11:14:06.876613 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:14:06.986372 sshd[4236]: Connection closed by 10.0.0.1 port 57420 Jan 29 11:14:06.986922 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:07.004120 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:57420.service: Deactivated successfully. Jan 29 11:14:07.006672 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:14:07.008651 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:14:07.013795 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:57428.service - OpenSSH per-connection server daemon (10.0.0.1:57428). Jan 29 11:14:07.015006 systemd-logind[1472]: Removed session 23. 
Jan 29 11:14:07.052915 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 57428 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:14:07.054686 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:07.059035 systemd-logind[1472]: New session 24 of user core. Jan 29 11:14:07.074616 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:14:08.416790 kubelet[2617]: E0129 11:14:08.416734 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:08.416790 kubelet[2617]: E0129 11:14:08.416784 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:08.483964 containerd[1490]: time="2025-01-29T11:14:08.483906099Z" level=info msg="StopContainer for \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\" with timeout 30 (s)" Jan 29 11:14:08.490504 containerd[1490]: time="2025-01-29T11:14:08.490476565Z" level=info msg="Stop container \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\" with signal terminated" Jan 29 11:14:08.505358 systemd[1]: cri-containerd-714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51.scope: Deactivated successfully. Jan 29 11:14:08.511915 containerd[1490]: time="2025-01-29T11:14:08.511859122Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:14:08.520176 containerd[1490]: time="2025-01-29T11:14:08.520104554Z" level=info msg="StopContainer for \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\" with timeout 2 (s)" Jan 29 11:14:08.520375 containerd[1490]: time="2025-01-29T11:14:08.520349870Z" level=info msg="Stop container \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\" with signal terminated" Jan 29 11:14:08.530497 systemd-networkd[1416]: lxc_health: Link DOWN Jan 29 11:14:08.530511 systemd-networkd[1416]: lxc_health: Lost carrier Jan 29 11:14:08.533865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51-rootfs.mount: Deactivated successfully. Jan 29 11:14:08.544592 containerd[1490]: time="2025-01-29T11:14:08.544509125Z" level=info msg="shim disconnected" id=714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51 namespace=k8s.io Jan 29 11:14:08.544592 containerd[1490]: time="2025-01-29T11:14:08.544588676Z" level=warning msg="cleaning up after shim disconnected" id=714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51 namespace=k8s.io Jan 29 11:14:08.544896 containerd[1490]: time="2025-01-29T11:14:08.544605047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:08.556713 systemd[1]: cri-containerd-bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd.scope: Deactivated successfully. Jan 29 11:14:08.557052 systemd[1]: cri-containerd-bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd.scope: Consumed 7.585s CPU time. 
Jan 29 11:14:08.571268 containerd[1490]: time="2025-01-29T11:14:08.571221043Z" level=info msg="StopContainer for \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\" returns successfully" Jan 29 11:14:08.575502 containerd[1490]: time="2025-01-29T11:14:08.575434978Z" level=info msg="StopPodSandbox for \"473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830\"" Jan 29 11:14:08.580284 containerd[1490]: time="2025-01-29T11:14:08.575488880Z" level=info msg="Container to stop \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:14:08.579830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd-rootfs.mount: Deactivated successfully. Jan 29 11:14:08.584650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830-shm.mount: Deactivated successfully. Jan 29 11:14:08.590045 systemd[1]: cri-containerd-473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830.scope: Deactivated successfully. Jan 29 11:14:08.611438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830-rootfs.mount: Deactivated successfully. Jan 29 11:14:08.668051 containerd[1490]: time="2025-01-29T11:14:08.667887828Z" level=info msg="shim disconnected" id=bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd namespace=k8s.io Jan 29 11:14:08.668051 containerd[1490]: time="2025-01-29T11:14:08.667951950Z" level=warning msg="cleaning up after shim disconnected" id=bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd namespace=k8s.io Jan 29 11:14:08.668051 containerd[1490]: time="2025-01-29T11:14:08.667962199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:08.743665 containerd[1490]: time="2025-01-29T11:14:08.743367879Z" level=info msg="shim disconnected" id=473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830 namespace=k8s.io Jan 29 11:14:08.743665 containerd[1490]: time="2025-01-29T11:14:08.743489420Z" level=warning msg="cleaning up after shim disconnected" id=473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830 namespace=k8s.io Jan 29 11:14:08.743665 containerd[1490]: time="2025-01-29T11:14:08.743508055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:08.759041 containerd[1490]: time="2025-01-29T11:14:08.758989696Z" level=info msg="TearDown network for sandbox \"473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830\" successfully" Jan 29 11:14:08.759041 containerd[1490]: time="2025-01-29T11:14:08.759027047Z" level=info msg="StopPodSandbox for \"473e395e3b39a0ab82a350a40b5593829ba9ceedb66e1f144fd23dfd7564a830\" returns successfully" Jan 29 11:14:08.764552 containerd[1490]: time="2025-01-29T11:14:08.764497767Z" level=info msg="StopContainer for \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\" returns successfully" Jan 29 11:14:08.764827 containerd[1490]: time="2025-01-29T11:14:08.764798236Z" level=info msg="StopPodSandbox for \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\"" Jan 29 11:14:08.764914 containerd[1490]: time="2025-01-29T11:14:08.764862789Z" level=info msg="Container to stop \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:14:08.764914 
containerd[1490]: time="2025-01-29T11:14:08.764908486Z" level=info msg="Container to stop \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:14:08.765003 containerd[1490]: time="2025-01-29T11:14:08.764919697Z" level=info msg="Container to stop \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:14:08.765003 containerd[1490]: time="2025-01-29T11:14:08.764935136Z" level=info msg="Container to stop \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:14:08.765003 containerd[1490]: time="2025-01-29T11:14:08.764945715Z" level=info msg="Container to stop \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:14:08.773133 systemd[1]: cri-containerd-ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1.scope: Deactivated successfully. Jan 29 11:14:08.797667 containerd[1490]: time="2025-01-29T11:14:08.797569650Z" level=info msg="shim disconnected" id=ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1 namespace=k8s.io Jan 29 11:14:08.797667 containerd[1490]: time="2025-01-29T11:14:08.797639612Z" level=warning msg="cleaning up after shim disconnected" id=ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1 namespace=k8s.io Jan 29 11:14:08.797667 containerd[1490]: time="2025-01-29T11:14:08.797650323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:08.815499 containerd[1490]: time="2025-01-29T11:14:08.815444411Z" level=info msg="TearDown network for sandbox \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" successfully" Jan 29 11:14:08.815705 containerd[1490]: time="2025-01-29T11:14:08.815676321Z" level=info msg="StopPodSandbox for \"ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1\" returns successfully" Jan 29 11:14:08.885898 kubelet[2617]: I0129 11:14:08.885816 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30ee7af-e066-4598-948f-65ad0b932440-cilium-config-path\") pod \"c30ee7af-e066-4598-948f-65ad0b932440\" (UID: \"c30ee7af-e066-4598-948f-65ad0b932440\") " Jan 29 11:14:08.885898 kubelet[2617]: I0129 11:14:08.885873 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-etc-cni-netd\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.885898 kubelet[2617]: I0129 11:14:08.885893 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-config-path\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.885898 kubelet[2617]: I0129 11:14:08.885907 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-lib-modules\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886239 
kubelet[2617]: I0129 11:14:08.885922 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-host-proc-sys-net\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886239 kubelet[2617]: I0129 11:14:08.885935 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-hostproc\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886239 kubelet[2617]: I0129 11:14:08.885947 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-xtables-lock\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886239 kubelet[2617]: I0129 11:14:08.885960 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-host-proc-sys-kernel\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886239 kubelet[2617]: I0129 11:14:08.885975 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcbdd151-2455-4b84-934e-c7d50eedc46e-hubble-tls\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886239 kubelet[2617]: I0129 11:14:08.885986 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cni-path\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886481 kubelet[2617]: I0129 11:14:08.885999 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-run\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886481 kubelet[2617]: I0129 11:14:08.886011 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-bpf-maps\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886481 kubelet[2617]: I0129 11:14:08.886002 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.886481 kubelet[2617]: I0129 11:14:08.886026 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxvt\" (UniqueName: \"kubernetes.io/projected/c30ee7af-e066-4598-948f-65ad0b932440-kube-api-access-rxxvt\") pod \"c30ee7af-e066-4598-948f-65ad0b932440\" (UID: \"c30ee7af-e066-4598-948f-65ad0b932440\") " Jan 29 11:14:08.886481 kubelet[2617]: I0129 11:14:08.886075 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-cgroup\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.886481 kubelet[2617]: I0129 11:14:08.886109 2617 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.886928 kubelet[2617]: I0129 11:14:08.886134 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.886928 kubelet[2617]: I0129 11:14:08.886158 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.886928 kubelet[2617]: I0129 11:14:08.886451 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.886928 kubelet[2617]: I0129 11:14:08.886477 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.886928 kubelet[2617]: I0129 11:14:08.886495 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-hostproc" (OuterVolumeSpecName: "hostproc") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.887103 kubelet[2617]: I0129 11:14:08.886508 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.887103 kubelet[2617]: I0129 11:14:08.886522 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cni-path" (OuterVolumeSpecName: "cni-path") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.889467 kubelet[2617]: I0129 11:14:08.889443 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbdd151-2455-4b84-934e-c7d50eedc46e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:14:08.889528 kubelet[2617]: I0129 11:14:08.889481 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.889528 kubelet[2617]: I0129 11:14:08.889498 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:14:08.890055 kubelet[2617]: I0129 11:14:08.890034 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c30ee7af-e066-4598-948f-65ad0b932440-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c30ee7af-e066-4598-948f-65ad0b932440" (UID: "c30ee7af-e066-4598-948f-65ad0b932440"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:08.890177 kubelet[2617]: I0129 11:14:08.890158 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30ee7af-e066-4598-948f-65ad0b932440-kube-api-access-rxxvt" (OuterVolumeSpecName: "kube-api-access-rxxvt") pod "c30ee7af-e066-4598-948f-65ad0b932440" (UID: "c30ee7af-e066-4598-948f-65ad0b932440"). InnerVolumeSpecName "kube-api-access-rxxvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:14:08.890535 kubelet[2617]: I0129 11:14:08.890515 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:08.986299 kubelet[2617]: I0129 11:14:08.986253 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcbdd151-2455-4b84-934e-c7d50eedc46e-clustermesh-secrets\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.986299 kubelet[2617]: I0129 11:14:08.986302 2617 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdxxt\" (UniqueName: \"kubernetes.io/projected/dcbdd151-2455-4b84-934e-c7d50eedc46e-kube-api-access-cdxxt\") pod \"dcbdd151-2455-4b84-934e-c7d50eedc46e\" (UID: \"dcbdd151-2455-4b84-934e-c7d50eedc46e\") " Jan 29 11:14:08.986510 kubelet[2617]: I0129 11:14:08.986341 2617 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30ee7af-e066-4598-948f-65ad0b932440-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986510 kubelet[2617]: I0129 11:14:08.986356 2617 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986510 kubelet[2617]: I0129 11:14:08.986366 2617 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986510 kubelet[2617]: I0129 11:14:08.986379 2617 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986510 kubelet[2617]: I0129 11:14:08.986390 2617 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986510 kubelet[2617]: I0129 11:14:08.986402 2617 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986510 kubelet[2617]: I0129 11:14:08.986430 2617 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986510 kubelet[2617]: I0129 11:14:08.986444 2617 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dcbdd151-2455-4b84-934e-c7d50eedc46e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986720 kubelet[2617]: I0129 11:14:08.986454 2617 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986720 kubelet[2617]: I0129 11:14:08.986463 2617 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986720 kubelet[2617]: I0129 11:14:08.986473 2617 
reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986720 kubelet[2617]: I0129 11:14:08.986484 2617 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rxxvt\" (UniqueName: \"kubernetes.io/projected/c30ee7af-e066-4598-948f-65ad0b932440-kube-api-access-rxxvt\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.986720 kubelet[2617]: I0129 11:14:08.986494 2617 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dcbdd151-2455-4b84-934e-c7d50eedc46e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:08.989891 kubelet[2617]: I0129 11:14:08.989855 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbdd151-2455-4b84-934e-c7d50eedc46e-kube-api-access-cdxxt" (OuterVolumeSpecName: "kube-api-access-cdxxt") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "kube-api-access-cdxxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:14:08.990061 kubelet[2617]: I0129 11:14:08.990035 2617 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbdd151-2455-4b84-934e-c7d50eedc46e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dcbdd151-2455-4b84-934e-c7d50eedc46e" (UID: "dcbdd151-2455-4b84-934e-c7d50eedc46e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:14:09.087349 kubelet[2617]: I0129 11:14:09.087302 2617 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cdxxt\" (UniqueName: \"kubernetes.io/projected/dcbdd151-2455-4b84-934e-c7d50eedc46e-kube-api-access-cdxxt\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:09.087349 kubelet[2617]: I0129 11:14:09.087339 2617 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dcbdd151-2455-4b84-934e-c7d50eedc46e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 11:14:09.424553 systemd[1]: Removed slice kubepods-besteffort-podc30ee7af_e066_4598_948f_65ad0b932440.slice - libcontainer container kubepods-besteffort-podc30ee7af_e066_4598_948f_65ad0b932440.slice. Jan 29 11:14:09.425914 systemd[1]: Removed slice kubepods-burstable-poddcbdd151_2455_4b84_934e_c7d50eedc46e.slice - libcontainer container kubepods-burstable-poddcbdd151_2455_4b84_934e_c7d50eedc46e.slice. Jan 29 11:14:09.425996 systemd[1]: kubepods-burstable-poddcbdd151_2455_4b84_934e_c7d50eedc46e.slice: Consumed 7.703s CPU time. Jan 29 11:14:09.493384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1-rootfs.mount: Deactivated successfully. Jan 29 11:14:09.493547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce3f662e986fba138ca3ecd5441906b9f31c1c970483d8cfcc475ea2a9291ce1-shm.mount: Deactivated successfully. Jan 29 11:14:09.493658 systemd[1]: var-lib-kubelet-pods-dcbdd151\x2d2455\x2d4b84\x2d934e\x2dc7d50eedc46e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcdxxt.mount: Deactivated successfully. Jan 29 11:14:09.493781 systemd[1]: var-lib-kubelet-pods-dcbdd151\x2d2455\x2d4b84\x2d934e\x2dc7d50eedc46e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 29 11:14:09.493914 systemd[1]: var-lib-kubelet-pods-dcbdd151\x2d2455\x2d4b84\x2d934e\x2dc7d50eedc46e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:14:09.494034 systemd[1]: var-lib-kubelet-pods-c30ee7af\x2de066\x2d4598\x2d948f\x2d65ad0b932440-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drxxvt.mount: Deactivated successfully. Jan 29 11:14:09.626316 kubelet[2617]: I0129 11:14:09.626277 2617 scope.go:117] "RemoveContainer" containerID="bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd" Jan 29 11:14:09.628171 containerd[1490]: time="2025-01-29T11:14:09.628116442Z" level=info msg="RemoveContainer for \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\"" Jan 29 11:14:09.635111 containerd[1490]: time="2025-01-29T11:14:09.635073637Z" level=info msg="RemoveContainer for \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\" returns successfully" Jan 29 11:14:09.635349 kubelet[2617]: I0129 11:14:09.635324 2617 scope.go:117] "RemoveContainer" containerID="f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da" Jan 29 11:14:09.636253 containerd[1490]: time="2025-01-29T11:14:09.636222086Z" level=info msg="RemoveContainer for \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\"" Jan 29 11:14:09.640315 containerd[1490]: time="2025-01-29T11:14:09.640050136Z" level=info msg="RemoveContainer for \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\" returns successfully" Jan 29 11:14:09.640437 kubelet[2617]: I0129 11:14:09.640312 2617 scope.go:117] "RemoveContainer" containerID="36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13" Jan 29 11:14:09.641818 containerd[1490]: time="2025-01-29T11:14:09.641784064Z" level=info msg="RemoveContainer for \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\"" Jan 29 11:14:09.645477 containerd[1490]: time="2025-01-29T11:14:09.645430902Z" level=info msg="RemoveContainer for \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\" returns successfully" Jan 29 11:14:09.645690 kubelet[2617]: I0129 11:14:09.645653 2617 scope.go:117] "RemoveContainer" containerID="83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333" Jan 29 11:14:09.646946 containerd[1490]: time="2025-01-29T11:14:09.646913704Z" level=info msg="RemoveContainer for \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\"" Jan 29 11:14:09.656448 containerd[1490]: time="2025-01-29T11:14:09.653381872Z" level=info msg="RemoveContainer for \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\" returns successfully" Jan 29 11:14:09.656448 containerd[1490]: time="2025-01-29T11:14:09.655115811Z" level=info msg="RemoveContainer for \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\"" Jan 29 11:14:09.656614 kubelet[2617]: I0129 11:14:09.653725 2617 scope.go:117] "RemoveContainer" containerID="02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c" Jan 29 11:14:09.662452 containerd[1490]: time="2025-01-29T11:14:09.662380940Z" level=info msg="RemoveContainer for \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\" returns successfully" Jan 29 11:14:09.672667 kubelet[2617]: I0129 11:14:09.672610 2617 scope.go:117] "RemoveContainer" containerID="bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd" Jan 29 11:14:09.673074 containerd[1490]: time="2025-01-29T11:14:09.672997646Z" level=error msg="ContainerStatus for 
\"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\": not found" Jan 29 11:14:09.673212 kubelet[2617]: E0129 11:14:09.673187 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\": not found" containerID="bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd" Jan 29 11:14:09.673301 kubelet[2617]: I0129 11:14:09.673222 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd"} err="failed to get container status \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb2cd4f3891edb73e5ed55b39c5fa4fd743b0b7b99d54a3acadea35f7e655afd\": not found" Jan 29 11:14:09.673301 kubelet[2617]: I0129 11:14:09.673299 2617 scope.go:117] "RemoveContainer" containerID="f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da" Jan 29 11:14:09.673499 containerd[1490]: time="2025-01-29T11:14:09.673471735Z" level=error msg="ContainerStatus for \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\": not found" Jan 29 11:14:09.673599 kubelet[2617]: E0129 11:14:09.673578 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\": not found" containerID="f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da" Jan 29 11:14:09.674477 kubelet[2617]: I0129 11:14:09.674454 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da"} err="failed to get container status \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\": rpc error: code = NotFound desc = an error occurred when try to find container \"f34c5cabcd32fe76feb454943b0232253ec3e13dfa23c6c22eb487116ec0a3da\": not found" Jan 29 11:14:09.674477 kubelet[2617]: I0129 11:14:09.674475 2617 scope.go:117] "RemoveContainer" containerID="36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13" Jan 29 11:14:09.674786 containerd[1490]: time="2025-01-29T11:14:09.674695597Z" level=error msg="ContainerStatus for \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\": not found" Jan 29 11:14:09.674824 kubelet[2617]: E0129 11:14:09.674788 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\": not found" containerID="36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13" Jan 29 11:14:09.674824 kubelet[2617]: I0129 11:14:09.674808 2617 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13"} err="failed to get container status \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\": rpc error: code = NotFound desc = an error occurred when try to find container \"36b3e10955bbe9ed49bf92c1b84cf336bf1990879730981e98e8012feabdcf13\": not found" Jan 29 11:14:09.674824 kubelet[2617]: I0129 11:14:09.674823 2617 scope.go:117] "RemoveContainer" containerID="83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333" Jan 29 11:14:09.675141 containerd[1490]: time="2025-01-29T11:14:09.675019992Z" level=error msg="ContainerStatus for \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\": not found" Jan 29 11:14:09.675223 kubelet[2617]: E0129 11:14:09.675203 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\": not found" containerID="83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333" Jan 29 11:14:09.675259 kubelet[2617]: I0129 11:14:09.675225 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333"} err="failed to get container status \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\": rpc error: code = NotFound desc = an error occurred when try to find container \"83d86e7f799bcfee63d152819e73d696282a5a236568bed9484117f846a21333\": not found" Jan 29 11:14:09.675259 kubelet[2617]: I0129 11:14:09.675237 2617 scope.go:117] "RemoveContainer" containerID="02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c" Jan 29 11:14:09.675374 containerd[1490]: time="2025-01-29T11:14:09.675348505Z" level=error msg="ContainerStatus for \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\": not found" Jan 29 11:14:09.678609 kubelet[2617]: E0129 11:14:09.678571 2617 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\": not found" containerID="02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c" Jan 29 11:14:09.678751 kubelet[2617]: I0129 11:14:09.678609 2617 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c"} err="failed to get container status \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"02f0dd583b2479d42ab412258d5cb582c75708b78572b2b1e94c7d8b3832bf1c\": not found" Jan 29 11:14:09.678751 kubelet[2617]: I0129 11:14:09.678629 2617 scope.go:117] "RemoveContainer" containerID="714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51" Jan 29 11:14:09.680343 containerd[1490]: time="2025-01-29T11:14:09.680303624Z" level=info msg="RemoveContainer for \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\"" Jan 29 
11:14:09.683865 containerd[1490]: time="2025-01-29T11:14:09.683812218Z" level=info msg="RemoveContainer for \"714830c1a85ee08b9caa06abec1a5bb512d58e3aae229ae23dc74567f8976b51\" returns successfully" Jan 29 11:14:10.382321 sshd[4251]: Connection closed by 10.0.0.1 port 57428 Jan 29 11:14:10.382755 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:10.393814 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:57428.service: Deactivated successfully. Jan 29 11:14:10.396431 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:14:10.398274 systemd-logind[1472]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:14:10.406735 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:57444.service - OpenSSH per-connection server daemon (10.0.0.1:57444). Jan 29 11:14:10.407880 systemd-logind[1472]: Removed session 24. Jan 29 11:14:10.444063 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 57444 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:14:10.445620 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:10.449860 systemd-logind[1472]: New session 25 of user core. Jan 29 11:14:10.460686 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:14:11.015291 sshd[4410]: Connection closed by 10.0.0.1 port 57444 Jan 29 11:14:11.016816 sshd-session[4408]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:11.025886 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:57444.service: Deactivated successfully. Jan 29 11:14:11.031443 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:14:11.031633 kubelet[2617]: E0129 11:14:11.031587 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcbdd151-2455-4b84-934e-c7d50eedc46e" containerName="apply-sysctl-overwrites" Jan 29 11:14:11.031633 kubelet[2617]: E0129 11:14:11.031631 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcbdd151-2455-4b84-934e-c7d50eedc46e" containerName="cilium-agent" Jan 29 11:14:11.031633 kubelet[2617]: E0129 11:14:11.031638 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcbdd151-2455-4b84-934e-c7d50eedc46e" containerName="mount-cgroup" Jan 29 11:14:11.032029 kubelet[2617]: E0129 11:14:11.031645 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcbdd151-2455-4b84-934e-c7d50eedc46e" containerName="mount-bpf-fs" Jan 29 11:14:11.032029 kubelet[2617]: E0129 11:14:11.031652 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c30ee7af-e066-4598-948f-65ad0b932440" containerName="cilium-operator" Jan 29 11:14:11.032029 kubelet[2617]: E0129 11:14:11.031658 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcbdd151-2455-4b84-934e-c7d50eedc46e" containerName="clean-cilium-state" Jan 29 11:14:11.032029 kubelet[2617]: I0129 11:14:11.031695 2617 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbdd151-2455-4b84-934e-c7d50eedc46e" containerName="cilium-agent" Jan 29 11:14:11.032029 kubelet[2617]: I0129 11:14:11.031703 2617 memory_manager.go:354] "RemoveStaleState removing state" podUID="c30ee7af-e066-4598-948f-65ad0b932440" containerName="cilium-operator" Jan 29 11:14:11.036123 systemd-logind[1472]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:14:11.050352 systemd[1]: Started sshd@26-10.0.0.80:22-10.0.0.1:46016.service - OpenSSH per-connection server daemon (10.0.0.1:46016). 
Jan 29 11:14:11.051784 systemd-logind[1472]: Removed session 25. Jan 29 11:14:11.062736 systemd[1]: Created slice kubepods-burstable-podb6528f00_a7c8_4582_a254_997cb74adfb8.slice - libcontainer container kubepods-burstable-podb6528f00_a7c8_4582_a254_997cb74adfb8.slice. Jan 29 11:14:11.094521 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 46016 ssh2: RSA SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:14:11.096374 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:11.101085 systemd-logind[1472]: New session 26 of user core. Jan 29 11:14:11.115561 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 11:14:11.168394 sshd[4423]: Connection closed by 10.0.0.1 port 46016 Jan 29 11:14:11.168793 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:11.178680 systemd[1]: sshd@26-10.0.0.80:22-10.0.0.1:46016.service: Deactivated successfully. Jan 29 11:14:11.180862 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:14:11.182760 systemd-logind[1472]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:14:11.184438 systemd[1]: Started sshd@27-10.0.0.80:22-10.0.0.1:46024.service - OpenSSH per-connection server daemon (10.0.0.1:46024). Jan 29 11:14:11.185429 systemd-logind[1472]: Removed session 26. Jan 29 11:14:11.197377 kubelet[2617]: I0129 11:14:11.197339 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-xtables-lock\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197473 kubelet[2617]: I0129 11:14:11.197387 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6528f00-a7c8-4582-a254-997cb74adfb8-cilium-config-path\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197473 kubelet[2617]: I0129 11:14:11.197432 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-etc-cni-netd\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197473 kubelet[2617]: I0129 11:14:11.197450 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-bpf-maps\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197590 kubelet[2617]: I0129 11:14:11.197553 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6528f00-a7c8-4582-a254-997cb74adfb8-clustermesh-secrets\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197651 kubelet[2617]: I0129 11:14:11.197609 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-cilium-run\") pod \"cilium-t45xw\" (UID: 
\"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197651 kubelet[2617]: I0129 11:14:11.197644 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-host-proc-sys-net\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197702 kubelet[2617]: I0129 11:14:11.197666 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx85s\" (UniqueName: \"kubernetes.io/projected/b6528f00-a7c8-4582-a254-997cb74adfb8-kube-api-access-kx85s\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197734 kubelet[2617]: I0129 11:14:11.197699 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b6528f00-a7c8-4582-a254-997cb74adfb8-cilium-ipsec-secrets\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197734 kubelet[2617]: I0129 11:14:11.197721 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-host-proc-sys-kernel\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197780 kubelet[2617]: I0129 11:14:11.197761 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-hostproc\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197803 kubelet[2617]: I0129 11:14:11.197782 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-cilium-cgroup\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197826 kubelet[2617]: I0129 11:14:11.197808 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-cni-path\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197863 kubelet[2617]: I0129 11:14:11.197840 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6528f00-a7c8-4582-a254-997cb74adfb8-lib-modules\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.197900 kubelet[2617]: I0129 11:14:11.197861 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6528f00-a7c8-4582-a254-997cb74adfb8-hubble-tls\") pod \"cilium-t45xw\" (UID: \"b6528f00-a7c8-4582-a254-997cb74adfb8\") " pod="kube-system/cilium-t45xw" Jan 29 11:14:11.227392 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 46024 ssh2: RSA 
SHA256:lkD44HqWGay7B41vMEzuR4B7mrONU8O33s9NYSBbVFY Jan 29 11:14:11.229221 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:11.234187 systemd-logind[1472]: New session 27 of user core. Jan 29 11:14:11.245645 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 11:14:11.368643 kubelet[2617]: E0129 11:14:11.368482 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:11.369768 containerd[1490]: time="2025-01-29T11:14:11.369295183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t45xw,Uid:b6528f00-a7c8-4582-a254-997cb74adfb8,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:11.395230 containerd[1490]: time="2025-01-29T11:14:11.394802868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:11.395230 containerd[1490]: time="2025-01-29T11:14:11.394947633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:11.395230 containerd[1490]: time="2025-01-29T11:14:11.394969273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:11.395230 containerd[1490]: time="2025-01-29T11:14:11.395112415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:11.415605 systemd[1]: Started cri-containerd-67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6.scope - libcontainer container 67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6. 
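
[Editor's note] The long run of VerifyControllerAttachedVolume messages above is the kubelet registering every volume declared by the new cilium-t45xw pod (xtables-lock, cilium-config-path, clustermesh-secrets, cilium-ipsec-secrets, hubble-tls, and so on). As a cross-check, here is a small client-go sketch that prints the same volume list straight from the pod spec; it assumes in-cluster credentials, while the pod name and namespace are taken from the log itself.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config; outside a cluster a kubeconfig loader would be used instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Pod name and namespace as they appear in the kubelet entries above.
	pod, err := client.CoreV1().Pods("kube-system").Get(
		context.Background(), "cilium-t45xw", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range pod.Spec.Volumes {
		fmt.Printf("volume %q: %+v\n", v.Name, v.VolumeSource)
	}
}
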
Jan 29 11:14:11.417177 kubelet[2617]: E0129 11:14:11.417146 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:11.419719 kubelet[2617]: I0129 11:14:11.419680 2617 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c30ee7af-e066-4598-948f-65ad0b932440" path="/var/lib/kubelet/pods/c30ee7af-e066-4598-948f-65ad0b932440/volumes" Jan 29 11:14:11.421141 kubelet[2617]: I0129 11:14:11.421091 2617 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbdd151-2455-4b84-934e-c7d50eedc46e" path="/var/lib/kubelet/pods/dcbdd151-2455-4b84-934e-c7d50eedc46e/volumes" Jan 29 11:14:11.443672 containerd[1490]: time="2025-01-29T11:14:11.443608318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t45xw,Uid:b6528f00-a7c8-4582-a254-997cb74adfb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\"" Jan 29 11:14:11.444673 kubelet[2617]: E0129 11:14:11.444639 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:11.447340 containerd[1490]: time="2025-01-29T11:14:11.447297972Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:14:11.462713 containerd[1490]: time="2025-01-29T11:14:11.462657569Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a33878fa45ae07023010a5498651b684b632c94641a05e1f4898472197e8f110\"" Jan 29 11:14:11.463683 containerd[1490]: time="2025-01-29T11:14:11.463644570Z" level=info msg="StartContainer for \"a33878fa45ae07023010a5498651b684b632c94641a05e1f4898472197e8f110\"" Jan 29 11:14:11.500580 systemd[1]: Started cri-containerd-a33878fa45ae07023010a5498651b684b632c94641a05e1f4898472197e8f110.scope - libcontainer container a33878fa45ae07023010a5498651b684b632c94641a05e1f4898472197e8f110. Jan 29 11:14:11.533797 containerd[1490]: time="2025-01-29T11:14:11.533730330Z" level=info msg="StartContainer for \"a33878fa45ae07023010a5498651b684b632c94641a05e1f4898472197e8f110\" returns successfully" Jan 29 11:14:11.542106 systemd[1]: cri-containerd-a33878fa45ae07023010a5498651b684b632c94641a05e1f4898472197e8f110.scope: Deactivated successfully. 
Jan 29 11:14:11.575636 containerd[1490]: time="2025-01-29T11:14:11.575557371Z" level=info msg="shim disconnected" id=a33878fa45ae07023010a5498651b684b632c94641a05e1f4898472197e8f110 namespace=k8s.io Jan 29 11:14:11.575636 containerd[1490]: time="2025-01-29T11:14:11.575625850Z" level=warning msg="cleaning up after shim disconnected" id=a33878fa45ae07023010a5498651b684b632c94641a05e1f4898472197e8f110 namespace=k8s.io Jan 29 11:14:11.575636 containerd[1490]: time="2025-01-29T11:14:11.575634768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:11.634381 kubelet[2617]: E0129 11:14:11.634258 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:11.636401 containerd[1490]: time="2025-01-29T11:14:11.636362097Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:14:11.730837 containerd[1490]: time="2025-01-29T11:14:11.730762070Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad125e618167a2425b3d77745d1549ce3c6f20893259cabb7ee60c6dde3711f1\"" Jan 29 11:14:11.731309 containerd[1490]: time="2025-01-29T11:14:11.731276395Z" level=info msg="StartContainer for \"ad125e618167a2425b3d77745d1549ce3c6f20893259cabb7ee60c6dde3711f1\"" Jan 29 11:14:11.762570 systemd[1]: Started cri-containerd-ad125e618167a2425b3d77745d1549ce3c6f20893259cabb7ee60c6dde3711f1.scope - libcontainer container ad125e618167a2425b3d77745d1549ce3c6f20893259cabb7ee60c6dde3711f1. Jan 29 11:14:11.788465 containerd[1490]: time="2025-01-29T11:14:11.788402729Z" level=info msg="StartContainer for \"ad125e618167a2425b3d77745d1549ce3c6f20893259cabb7ee60c6dde3711f1\" returns successfully" Jan 29 11:14:11.796915 systemd[1]: cri-containerd-ad125e618167a2425b3d77745d1549ce3c6f20893259cabb7ee60c6dde3711f1.scope: Deactivated successfully. 
Jan 29 11:14:11.819960 containerd[1490]: time="2025-01-29T11:14:11.819897372Z" level=info msg="shim disconnected" id=ad125e618167a2425b3d77745d1549ce3c6f20893259cabb7ee60c6dde3711f1 namespace=k8s.io Jan 29 11:14:11.819960 containerd[1490]: time="2025-01-29T11:14:11.819951584Z" level=warning msg="cleaning up after shim disconnected" id=ad125e618167a2425b3d77745d1549ce3c6f20893259cabb7ee60c6dde3711f1 namespace=k8s.io Jan 29 11:14:11.819960 containerd[1490]: time="2025-01-29T11:14:11.819960212Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:12.466423 kubelet[2617]: E0129 11:14:12.466369 2617 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:14:12.637240 kubelet[2617]: E0129 11:14:12.637203 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:12.640571 containerd[1490]: time="2025-01-29T11:14:12.640522225Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:14:12.659375 containerd[1490]: time="2025-01-29T11:14:12.659323797Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c\"" Jan 29 11:14:12.660055 containerd[1490]: time="2025-01-29T11:14:12.660000460Z" level=info msg="StartContainer for \"6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c\"" Jan 29 11:14:12.696569 systemd[1]: Started cri-containerd-6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c.scope - libcontainer container 6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c. Jan 29 11:14:12.731692 containerd[1490]: time="2025-01-29T11:14:12.731561403Z" level=info msg="StartContainer for \"6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c\" returns successfully" Jan 29 11:14:12.731882 systemd[1]: cri-containerd-6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c.scope: Deactivated successfully. Jan 29 11:14:12.761192 containerd[1490]: time="2025-01-29T11:14:12.761100516Z" level=info msg="shim disconnected" id=6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c namespace=k8s.io Jan 29 11:14:12.761192 containerd[1490]: time="2025-01-29T11:14:12.761173815Z" level=warning msg="cleaning up after shim disconnected" id=6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c namespace=k8s.io Jan 29 11:14:12.761192 containerd[1490]: time="2025-01-29T11:14:12.761185597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:13.304294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c842342e04856698e82f78a1c9c9bc0d3f027a6bebd94b19d6222ac5896053c-rootfs.mount: Deactivated successfully. 
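
[Editor's note] The recurring dns.go:153 warning above means the node's resolv.conf lists more nameservers than the kubelet will pass through, so the list is truncated; the applied line in this log keeps exactly three entries (1.1.1.1, 1.0.0.1, 8.8.8.8). The stdlib-only Go sketch below illustrates that truncation behaviour; it is an illustration of the effect seen in the log, not the kubelet's actual implementation.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// maxNameservers mirrors the limit implied by the warning above,
// which applies only the first three resolvers.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("dropping %d nameserver(s), keeping %v\n",
			len(servers)-maxNameservers, servers[:maxNameservers])
	} else {
		fmt.Println("nameservers within limit:", servers)
	}
}
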
Jan 29 11:14:13.641288 kubelet[2617]: E0129 11:14:13.641134 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:13.643146 containerd[1490]: time="2025-01-29T11:14:13.642936327Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:14:13.690353 containerd[1490]: time="2025-01-29T11:14:13.690307132Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0\"" Jan 29 11:14:13.691890 containerd[1490]: time="2025-01-29T11:14:13.690975709Z" level=info msg="StartContainer for \"97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0\"" Jan 29 11:14:13.729605 systemd[1]: Started cri-containerd-97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0.scope - libcontainer container 97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0. Jan 29 11:14:13.761317 systemd[1]: cri-containerd-97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0.scope: Deactivated successfully. Jan 29 11:14:13.762488 containerd[1490]: time="2025-01-29T11:14:13.762448782Z" level=info msg="StartContainer for \"97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0\" returns successfully" Jan 29 11:14:13.786318 containerd[1490]: time="2025-01-29T11:14:13.786247876Z" level=info msg="shim disconnected" id=97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0 namespace=k8s.io Jan 29 11:14:13.786318 containerd[1490]: time="2025-01-29T11:14:13.786309493Z" level=warning msg="cleaning up after shim disconnected" id=97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0 namespace=k8s.io Jan 29 11:14:13.786318 containerd[1490]: time="2025-01-29T11:14:13.786320053Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:14.304378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97e9e9489df6731dc039c59b04a08081890fc0057d74d8505863c642b1fbd9b0-rootfs.mount: Deactivated successfully. 
Jan 29 11:14:14.645622 kubelet[2617]: E0129 11:14:14.645048 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:14.647208 containerd[1490]: time="2025-01-29T11:14:14.647158174Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:14:14.946303 containerd[1490]: time="2025-01-29T11:14:14.946245608Z" level=info msg="CreateContainer within sandbox \"67a19ed95e1b3810371483c246edb881d0e5c7175b10dbe867a56261715c28b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"08b6f4ea3277bb6f7bf336027e0902f39aa3914a9807b0fa8d127535bbc4dcc3\"" Jan 29 11:14:14.946920 containerd[1490]: time="2025-01-29T11:14:14.946836997Z" level=info msg="StartContainer for \"08b6f4ea3277bb6f7bf336027e0902f39aa3914a9807b0fa8d127535bbc4dcc3\"" Jan 29 11:14:14.977572 systemd[1]: Started cri-containerd-08b6f4ea3277bb6f7bf336027e0902f39aa3914a9807b0fa8d127535bbc4dcc3.scope - libcontainer container 08b6f4ea3277bb6f7bf336027e0902f39aa3914a9807b0fa8d127535bbc4dcc3. Jan 29 11:14:15.090154 containerd[1490]: time="2025-01-29T11:14:15.090090435Z" level=info msg="StartContainer for \"08b6f4ea3277bb6f7bf336027e0902f39aa3914a9807b0fa8d127535bbc4dcc3\" returns successfully" Jan 29 11:14:15.486509 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 11:14:15.650317 kubelet[2617]: E0129 11:14:15.650270 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:17.369122 kubelet[2617]: E0129 11:14:17.369067 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:18.788429 systemd-networkd[1416]: lxc_health: Link UP Jan 29 11:14:18.799770 systemd-networkd[1416]: lxc_health: Gained carrier Jan 29 11:14:19.374691 kubelet[2617]: E0129 11:14:19.374655 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:19.424495 kubelet[2617]: I0129 11:14:19.424419 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t45xw" podStartSLOduration=8.424387114 podStartE2EDuration="8.424387114s" podCreationTimestamp="2025-01-29 11:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:15.698337041 +0000 UTC m=+88.371057683" watchObservedRunningTime="2025-01-29 11:14:19.424387114 +0000 UTC m=+92.097107756" Jan 29 11:14:19.659769 kubelet[2617]: E0129 11:14:19.659568 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:19.783929 systemd[1]: run-containerd-runc-k8s.io-08b6f4ea3277bb6f7bf336027e0902f39aa3914a9807b0fa8d127535bbc4dcc3-runc.HkMdqG.mount: Deactivated successfully. 
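
[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=8.424387114s for cilium-t45xw, which is simply the gap between the pod's creation timestamp (11:14:11) and the observed running time (11:14:19.424387114). The tiny Go check below re-derives that figure from the two timestamps exactly as they appear in the log.

package main

import (
	"fmt"
	"log"
	"time"
)

func mustParse(layout, value string) time.Time {
	t, err := time.Parse(layout, value)
	if err != nil {
		log.Fatal(err)
	}
	return t
}

func main() {
	// Layout matching the timestamp format printed by the kubelet above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created := mustParse(layout, "2025-01-29 11:14:11 +0000 UTC")
	running := mustParse(layout, "2025-01-29 11:14:19.424387114 +0000 UTC")
	fmt.Println(running.Sub(created)) // prints 8.424387114s, the reported podStartSLOduration
}
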
Jan 29 11:14:20.313488 systemd-networkd[1416]: lxc_health: Gained IPv6LL Jan 29 11:14:20.661214 kubelet[2617]: E0129 11:14:20.661066 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:26.155546 sshd[4431]: Connection closed by 10.0.0.1 port 46024 Jan 29 11:14:26.156330 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:26.160944 systemd[1]: sshd@27-10.0.0.80:22-10.0.0.1:46024.service: Deactivated successfully. Jan 29 11:14:26.163317 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 11:14:26.163986 systemd-logind[1472]: Session 27 logged out. Waiting for processes to exit. Jan 29 11:14:26.164843 systemd-logind[1472]: Removed session 27. Jan 29 11:14:26.416749 kubelet[2617]: E0129 11:14:26.416573 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"