Jan 30 13:44:23.876157 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:44:23.876177 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:44:23.876188 kernel: BIOS-provided physical RAM map:
Jan 30 13:44:23.876194 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:44:23.876200 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:44:23.876206 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:44:23.876213 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:44:23.876219 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:44:23.876225 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 30 13:44:23.876231 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 30 13:44:23.876239 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 30 13:44:23.876245 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 30 13:44:23.876250 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 30 13:44:23.876257 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 30 13:44:23.876264 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 30 13:44:23.876271 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:44:23.876280 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 30 13:44:23.876286 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 30 13:44:23.876292 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:44:23.876299 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:44:23.876305 kernel: NX (Execute Disable) protection: active
Jan 30 13:44:23.876312 kernel: APIC: Static calls initialized
Jan 30 13:44:23.876318 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:44:23.876324 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 30 13:44:23.876331 kernel: SMBIOS 2.8 present.
Jan 30 13:44:23.876337 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 30 13:44:23.876344 kernel: Hypervisor detected: KVM
Jan 30 13:44:23.876352 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:44:23.876359 kernel: kvm-clock: using sched offset of 3951772728 cycles
Jan 30 13:44:23.876365 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:44:23.876372 kernel: tsc: Detected 2794.750 MHz processor
Jan 30 13:44:23.876379 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:44:23.876386 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:44:23.876393 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 30 13:44:23.876400 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:44:23.876406 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:44:23.876415 kernel: Using GB pages for direct mapping
Jan 30 13:44:23.876421 kernel: Secure boot disabled
Jan 30 13:44:23.876428 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:44:23.876435 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 13:44:23.876445 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:44:23.876452 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:23.876459 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:23.876468 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 13:44:23.876475 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:23.876482 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:23.876489 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:23.876496 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:44:23.876503 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:44:23.876518 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 13:44:23.876528 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 13:44:23.876535 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 13:44:23.876541 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 13:44:23.876548 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 13:44:23.876555 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 13:44:23.876562 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 13:44:23.876568 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 13:44:23.876575 kernel: No NUMA configuration found
Jan 30 13:44:23.876582 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 30 13:44:23.876589 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 30 13:44:23.876598 kernel: Zone ranges:
Jan 30 13:44:23.876605 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:44:23.876611 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 30 13:44:23.876618 kernel: Normal empty
Jan 30 13:44:23.876625 kernel: Movable zone start for each node
Jan 30 13:44:23.876632 kernel: Early memory node ranges
Jan 30 13:44:23.876638 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:44:23.876645 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 13:44:23.876652 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 13:44:23.876661 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 30 13:44:23.876668 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 30 13:44:23.876674 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 30 13:44:23.876681 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 30 13:44:23.876688 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:44:23.876695 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:44:23.876702 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 13:44:23.876708 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:44:23.876715 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 30 13:44:23.876724 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 30 13:44:23.876731 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 30 13:44:23.876738 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:44:23.876745 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:44:23.876752 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:44:23.876759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:44:23.876765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:44:23.876772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:44:23.876779 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:44:23.876786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:44:23.876795 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:44:23.876802 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:44:23.876809 kernel: TSC deadline timer available
Jan 30 13:44:23.876816 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:44:23.876823 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:44:23.876830 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:44:23.876836 kernel: kvm-guest: setup PV sched yield
Jan 30 13:44:23.876843 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 13:44:23.876850 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:44:23.876859 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:44:23.876866 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:44:23.876873 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:44:23.876880 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:44:23.876886 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:44:23.876893 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:44:23.876900 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:44:23.876908 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:44:23.876917 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:44:23.876924 kernel: random: crng init done
Jan 30 13:44:23.876931 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:44:23.876938 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:44:23.876945 kernel: Fallback order for Node 0: 0
Jan 30 13:44:23.876952 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 30 13:44:23.876959 kernel: Policy zone: DMA32
Jan 30 13:44:23.876966 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:44:23.876973 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171128K reserved, 0K cma-reserved)
Jan 30 13:44:23.876982 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:44:23.876989 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:44:23.876996 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:44:23.877003 kernel: Dynamic Preempt: voluntary
Jan 30 13:44:23.877016 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:44:23.877026 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:44:23.877033 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:44:23.877059 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:44:23.877066 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:44:23.877073 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:44:23.877081 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:44:23.877088 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:44:23.877097 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:44:23.877104 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:44:23.877112 kernel: Console: colour dummy device 80x25
Jan 30 13:44:23.877119 kernel: printk: console [ttyS0] enabled
Jan 30 13:44:23.877126 kernel: ACPI: Core revision 20230628
Jan 30 13:44:23.877135 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:44:23.877142 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:44:23.877150 kernel: x2apic enabled
Jan 30 13:44:23.877157 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:44:23.877164 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:44:23.877171 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:44:23.877178 kernel: kvm-guest: setup PV IPIs
Jan 30 13:44:23.877186 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:44:23.877193 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:44:23.877202 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 30 13:44:23.877209 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:44:23.877216 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:44:23.877224 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:44:23.877231 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:44:23.877238 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:44:23.877245 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:44:23.877252 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:44:23.877259 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:44:23.877269 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:44:23.877276 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:44:23.877283 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:44:23.877290 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:44:23.877298 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:44:23.877305 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:44:23.877313 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:44:23.877320 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:44:23.877329 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:44:23.877336 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:44:23.877343 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:44:23.877351 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:44:23.877358 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:44:23.877365 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:44:23.877372 kernel: landlock: Up and running.
Jan 30 13:44:23.877379 kernel: SELinux: Initializing.
Jan 30 13:44:23.877386 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:44:23.877395 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:44:23.877403 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:44:23.877410 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:44:23.877417 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:44:23.877424 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:44:23.877432 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:44:23.877439 kernel: ... version: 0
Jan 30 13:44:23.877446 kernel: ... bit width: 48
Jan 30 13:44:23.877453 kernel: ... generic registers: 6
Jan 30 13:44:23.877462 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:44:23.877469 kernel: ... max period: 00007fffffffffff
Jan 30 13:44:23.877476 kernel: ... fixed-purpose events: 0
Jan 30 13:44:23.877483 kernel: ... event mask: 000000000000003f
Jan 30 13:44:23.877490 kernel: signal: max sigframe size: 1776
Jan 30 13:44:23.877497 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:44:23.877505 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:44:23.877519 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:44:23.877526 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:44:23.877535 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:44:23.877542 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:44:23.877549 kernel: smpboot: Max logical packages: 1
Jan 30 13:44:23.877557 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 30 13:44:23.877564 kernel: devtmpfs: initialized
Jan 30 13:44:23.877571 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:44:23.877578 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 13:44:23.877585 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 13:44:23.877592 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 30 13:44:23.877602 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 13:44:23.877609 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 13:44:23.877616 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:44:23.877623 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:44:23.877631 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:44:23.877638 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:44:23.877645 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:44:23.877652 kernel: audit: type=2000 audit(1738244663.544:1): state=initialized audit_enabled=0 res=1
Jan 30 13:44:23.877659 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:44:23.877669 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:44:23.877676 kernel: cpuidle: using governor menu
Jan 30 13:44:23.877683 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:44:23.877690 kernel: dca service started, version 1.12.1
Jan 30 13:44:23.877697 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:44:23.877705 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:44:23.877712 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:44:23.877719 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:44:23.877726 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:44:23.877736 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:44:23.877743 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:44:23.877750 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:44:23.877757 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:44:23.877765 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:44:23.877772 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:44:23.877779 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:44:23.877786 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:44:23.877793 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:44:23.877802 kernel: ACPI: Interpreter enabled
Jan 30 13:44:23.877810 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:44:23.877817 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:44:23.877824 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:44:23.877831 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:44:23.877838 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:44:23.877845 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:44:23.878013 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:44:23.878161 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:44:23.878282 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:44:23.878292 kernel: PCI host bridge to bus 0000:00
Jan 30 13:44:23.878427 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:44:23.878548 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:44:23.878659 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:44:23.878769 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:44:23.878884 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:44:23.878993 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 30 13:44:23.879197 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:44:23.879365 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:44:23.879495 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:44:23.879625 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 13:44:23.879750 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 13:44:23.879869 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 13:44:23.879987 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 13:44:23.880135 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:44:23.880268 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:44:23.880462 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 13:44:23.880635 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 13:44:23.880809 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 30 13:44:23.880943 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:44:23.881098 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 13:44:23.881223 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 13:44:23.881344 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 30 13:44:23.881479 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:44:23.881612 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 13:44:23.881738 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 13:44:23.881858 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 30 13:44:23.881990 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 13:44:23.882171 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:44:23.882295 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:44:23.882423 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:44:23.882552 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 13:44:23.882677 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 13:44:23.882811 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:44:23.882930 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 13:44:23.882940 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:44:23.882948 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:44:23.882955 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:44:23.882963 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:44:23.882973 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:44:23.882981 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:44:23.882988 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:44:23.882995 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:44:23.883002 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:44:23.883009 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:44:23.883016 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:44:23.883024 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:44:23.883031 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:44:23.883053 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:44:23.883060 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:44:23.883067 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:44:23.883074 kernel: iommu: Default domain type: Translated
Jan 30 13:44:23.883081 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:44:23.883088 kernel: efivars: Registered efivars operations
Jan 30 13:44:23.883096 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:44:23.883103 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:44:23.883110 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 13:44:23.883119 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 30 13:44:23.883126 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 30 13:44:23.883133 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 30 13:44:23.883254 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:44:23.883399 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:44:23.883537 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:44:23.883548 kernel: vgaarb: loaded
Jan 30 13:44:23.883555 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:44:23.883562 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:44:23.883573 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:44:23.883580 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:44:23.883588 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:44:23.883596 kernel: pnp: PnP ACPI init
Jan 30 13:44:23.883726 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:44:23.883736 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:44:23.883744 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:44:23.883751 kernel: NET: Registered PF_INET protocol family
Jan 30 13:44:23.883761 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:44:23.883768 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:44:23.883776 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:44:23.883783 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:44:23.883790 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:44:23.883798 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:44:23.883805 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:44:23.883812 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:44:23.883820 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:44:23.883829 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:44:23.883950 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 13:44:23.884139 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 13:44:23.884262 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:44:23.884372 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:44:23.884480 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:44:23.884598 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:44:23.884706 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:44:23.884818 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 30 13:44:23.884828 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:44:23.884835 kernel: Initialise system trusted keyrings
Jan 30 13:44:23.884843 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:44:23.884850 kernel: Key type asymmetric registered
Jan 30 13:44:23.884857 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:44:23.884864 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:44:23.884871 kernel: io scheduler mq-deadline registered
Jan 30 13:44:23.884878 kernel: io scheduler kyber registered
Jan 30 13:44:23.884888 kernel: io scheduler bfq registered
Jan 30 13:44:23.884905 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:44:23.884913 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:44:23.884927 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:44:23.884935 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:44:23.884943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:44:23.884950 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:44:23.884957 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:44:23.884964 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:44:23.884974 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:44:23.884981 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:44:23.885125 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:44:23.885240 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:44:23.885352 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:44:23 UTC (1738244663)
Jan 30 13:44:23.885463 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:44:23.885472 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:44:23.885483 kernel: efifb: probing for efifb
Jan 30 13:44:23.885490 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 30 13:44:23.885497 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 30 13:44:23.885505 kernel: efifb: scrolling: redraw
Jan 30 13:44:23.885519 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 30 13:44:23.885527 kernel: Console: switching to colour frame buffer device 100x37
Jan 30 13:44:23.885550 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:44:23.885559 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:44:23.885567 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:44:23.885576 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:44:23.885583 kernel: Segment Routing with IPv6
Jan 30 13:44:23.885591 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:44:23.885598 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:44:23.885605 kernel: Key type dns_resolver registered
Jan 30 13:44:23.885613 kernel: IPI shorthand broadcast: enabled
Jan 30 13:44:23.885620 kernel: sched_clock: Marking stable (590002565, 114069801)->(721289738, -17217372)
Jan 30 13:44:23.885627 kernel: registered taskstats version 1
Jan 30 13:44:23.885635 kernel: Loading compiled-in X.509 certificates
Jan 30 13:44:23.885642 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:44:23.885652 kernel: Key type .fscrypt registered
Jan 30 13:44:23.885659 kernel: Key type fscrypt-provisioning registered
Jan 30 13:44:23.885666 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:44:23.885674 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:44:23.885681 kernel: ima: No architecture policies found
Jan 30 13:44:23.885688 kernel: clk: Disabling unused clocks
Jan 30 13:44:23.885696 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:44:23.885703 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:44:23.885713 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:44:23.885720 kernel: Run /init as init process
Jan 30 13:44:23.885727 kernel: with arguments:
Jan 30 13:44:23.885735 kernel: /init
Jan 30 13:44:23.885742 kernel: with environment:
Jan 30 13:44:23.885749 kernel: HOME=/
Jan 30 13:44:23.885756 kernel: TERM=linux
Jan 30 13:44:23.885764 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:44:23.885773 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:44:23.885785 systemd[1]: Detected virtualization kvm.
Jan 30 13:44:23.885795 systemd[1]: Detected architecture x86-64.
Jan 30 13:44:23.885803 systemd[1]: Running in initrd.
Jan 30 13:44:23.885813 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:44:23.885823 systemd[1]: Hostname set to <localhost>.
Jan 30 13:44:23.885831 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:44:23.885838 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:44:23.885846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:44:23.885854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:44:23.885863 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:44:23.885871 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:44:23.885879 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:44:23.885889 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:44:23.885899 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:44:23.885907 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:44:23.885915 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:44:23.885923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:44:23.885931 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:44:23.885938 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:44:23.885948 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:44:23.885956 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:44:23.885964 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:44:23.885972 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:44:23.885980 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:44:23.885988 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:44:23.885996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:44:23.886004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:44:23.886014 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:44:23.886022 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:44:23.886029 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:44:23.886050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:44:23.886058 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:44:23.886066 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:44:23.886073 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:44:23.886081 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:44:23.886089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:44:23.886099 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:44:23.886107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:44:23.886115 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:44:23.886141 systemd-journald[192]: Collecting audit messages is disabled.
Jan 30 13:44:23.886161 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:44:23.886170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:44:23.886178 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:44:23.886186 systemd-journald[192]: Journal started
Jan 30 13:44:23.886206 systemd-journald[192]: Runtime Journal (/run/log/journal/c74c28b50ad948c595019c13449e9e3f) is 6.0M, max 48.3M, 42.2M free.
Jan 30 13:44:23.880831 systemd-modules-load[194]: Inserted module 'overlay'
Jan 30 13:44:23.888058 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:44:23.901201 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:44:23.903004 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:44:23.904109 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:44:23.915501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:44:23.917773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:44:23.924062 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:44:23.924196 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:44:23.925116 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:44:23.930802 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 30 13:44:23.931735 kernel: Bridge firewalling registered
Jan 30 13:44:23.933238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:44:23.935301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:44:23.943283 dracut-cmdline[222]: dracut-dracut-053
Jan 30 13:44:23.947022 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:44:23.952355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:44:23.960194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:44:23.992399 systemd-resolved[247]: Positive Trust Anchors:
Jan 30 13:44:23.992417 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:44:23.992448 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:44:23.997811 systemd-resolved[247]: Defaulting to hostname 'linux'.
Jan 30 13:44:23.998931 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:44:24.002397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:44:24.040062 kernel: SCSI subsystem initialized
Jan 30 13:44:24.049062 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:44:24.060087 kernel: iscsi: registered transport (tcp)
Jan 30 13:44:24.083084 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:44:24.083117 kernel: QLogic iSCSI HBA Driver
Jan 30 13:44:24.136580 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:44:24.148196 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:44:24.172269 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:44:24.172324 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:44:24.173307 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:44:24.216065 kernel: raid6: avx2x4 gen() 30604 MB/s
Jan 30 13:44:24.233059 kernel: raid6: avx2x2 gen() 31407 MB/s
Jan 30 13:44:24.250157 kernel: raid6: avx2x1 gen() 26008 MB/s
Jan 30 13:44:24.250171 kernel: raid6: using algorithm avx2x2 gen() 31407 MB/s
Jan 30 13:44:24.268153 kernel: raid6: .... xor() 19987 MB/s, rmw enabled
Jan 30 13:44:24.268168 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:44:24.288063 kernel: xor: automatically using best checksumming function avx
Jan 30 13:44:24.439068 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:44:24.453036 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:44:24.462237 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:44:24.473843 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 30 13:44:24.478412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:44:24.482152 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:44:24.498101 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jan 30 13:44:24.530239 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:44:24.544180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:44:24.605395 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:44:24.617315 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:44:24.632204 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:44:24.634628 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:44:24.637146 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:44:24.638365 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:44:24.645071 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 30 13:44:24.673140 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:44:24.673156 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:44:24.673303 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:44:24.673314 kernel: GPT:9289727 != 19775487
Jan 30 13:44:24.673325 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:44:24.673341 kernel: GPT:9289727 != 19775487
Jan 30 13:44:24.673350 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:44:24.673360 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:44:24.655242 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:44:24.663751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:44:24.677063 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:44:24.663901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:44:24.683150 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:44:24.683174 kernel: libata version 3.00 loaded.
Jan 30 13:44:24.674907 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:44:24.676243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:44:24.676397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:44:24.691204 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:44:24.718892 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:44:24.718925 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:44:24.719137 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:44:24.719332 kernel: scsi host0: ahci
Jan 30 13:44:24.719534 kernel: scsi host1: ahci
Jan 30 13:44:24.719708 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (467)
Jan 30 13:44:24.719720 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Jan 30 13:44:24.719730 kernel: scsi host2: ahci
Jan 30 13:44:24.719890 kernel: scsi host3: ahci
Jan 30 13:44:24.720036 kernel: scsi host4: ahci
Jan 30 13:44:24.720243 kernel: scsi host5: ahci
Jan 30 13:44:24.721685 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 30 13:44:24.721697 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 30 13:44:24.721707 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 30 13:44:24.721717 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 30 13:44:24.721732 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 30 13:44:24.721741 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 30 13:44:24.679027 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:44:24.693581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:44:24.694589 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:44:24.720262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:44:24.730787 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:44:24.741560 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:44:24.746954 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:44:24.748373 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:44:24.755521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:44:24.769161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:44:24.770469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:44:24.770530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:44:24.773125 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:44:24.780431 disk-uuid[554]: Primary Header is updated.
Jan 30 13:44:24.780431 disk-uuid[554]: Secondary Entries is updated.
Jan 30 13:44:24.780431 disk-uuid[554]: Secondary Header is updated.
Jan 30 13:44:24.783519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:44:24.775139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:44:24.786075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:44:24.793284 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:44:24.803167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:44:24.824355 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:44:25.027393 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:44:25.027462 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:44:25.027472 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 13:44:25.029065 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:44:25.029150 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:44:25.030076 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:44:25.031080 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:44:25.031099 kernel: ata3.00: applying bridge limits
Jan 30 13:44:25.032110 kernel: ata3.00: configured for UDMA/100
Jan 30 13:44:25.033068 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:44:25.083624 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:44:25.096268 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:44:25.096285 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:44:25.787779 disk-uuid[556]: The operation has completed successfully.
Jan 30 13:44:25.789283 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:44:25.814608 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:44:25.814732 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:44:25.845168 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:44:25.848251 sh[595]: Success
Jan 30 13:44:25.860065 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:44:25.891296 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:44:25.908478 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:44:25.910894 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:44:25.922718 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:44:25.922750 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:44:25.922760 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:44:25.924492 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:44:25.924511 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:44:25.930021 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:44:25.930273 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:44:25.939153 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:44:25.939864 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:44:25.951774 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:44:25.951804 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:44:25.951816 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:44:25.955110 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:44:25.963453 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:44:25.965294 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:44:25.973602 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:44:25.979214 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:44:26.026121 ignition[689]: Ignition 2.19.0
Jan 30 13:44:26.026132 ignition[689]: Stage: fetch-offline
Jan 30 13:44:26.026168 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:44:26.026177 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:44:26.026270 ignition[689]: parsed url from cmdline: ""
Jan 30 13:44:26.026274 ignition[689]: no config URL provided
Jan 30 13:44:26.026279 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:44:26.026288 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:44:26.026316 ignition[689]: op(1): [started] loading QEMU firmware config module
Jan 30 13:44:26.026322 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:44:26.035771 ignition[689]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:44:26.066725 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:44:26.076878 ignition[689]: parsing config with SHA512: 185a7664f06501e1805e2b022d3e4e8bffd0ec47f11e53c18dd82e4e118eda9f9e26ec7f2768de0394f11f258bcc113067b709a6f8972776961f1cc95cad27c0
Jan 30 13:44:26.078192 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:44:26.080687 unknown[689]: fetched base config from "system"
Jan 30 13:44:26.080700 unknown[689]: fetched user config from "qemu"
Jan 30 13:44:26.082285 ignition[689]: fetch-offline: fetch-offline passed
Jan 30 13:44:26.082361 ignition[689]: Ignition finished successfully
Jan 30 13:44:26.086757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:44:26.100427 systemd-networkd[783]: lo: Link UP
Jan 30 13:44:26.100438 systemd-networkd[783]: lo: Gained carrier
Jan 30 13:44:26.101969 systemd-networkd[783]: Enumeration completed
Jan 30 13:44:26.102056 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:44:26.102361 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:44:26.102366 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:44:26.103712 systemd-networkd[783]: eth0: Link UP
Jan 30 13:44:26.103715 systemd-networkd[783]: eth0: Gained carrier
Jan 30 13:44:26.103722 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:44:26.104356 systemd[1]: Reached target network.target - Network.
Jan 30 13:44:26.105373 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:44:26.113211 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:44:26.115088 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:44:26.126880 ignition[786]: Ignition 2.19.0
Jan 30 13:44:26.126890 ignition[786]: Stage: kargs
Jan 30 13:44:26.127063 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:44:26.127075 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:44:26.127873 ignition[786]: kargs: kargs passed
Jan 30 13:44:26.131091 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:44:26.127913 ignition[786]: Ignition finished successfully
Jan 30 13:44:26.146290 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:44:26.157760 ignition[795]: Ignition 2.19.0
Jan 30 13:44:26.157772 ignition[795]: Stage: disks
Jan 30 13:44:26.157944 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:44:26.157956 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:44:26.161031 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:44:26.158792 ignition[795]: disks: disks passed
Jan 30 13:44:26.162595 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:44:26.158838 ignition[795]: Ignition finished successfully
Jan 30 13:44:26.164498 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:44:26.164570 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:44:26.164752 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:44:26.165254 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:44:26.179164 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:44:26.190933 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:44:26.197375 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:44:26.205132 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:44:26.289074 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:44:26.289601 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:44:26.291828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:44:26.305118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:44:26.306093 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:44:26.307878 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:44:26.307916 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:44:26.307935 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:44:26.318815 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:44:26.321658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:44:26.324463 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Jan 30 13:44:26.326487 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:44:26.326507 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:44:26.326517 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:44:26.330050 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:44:26.331640 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:44:26.358317 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:44:26.362815 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:44:26.367099 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:44:26.370994 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:44:26.456647 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:44:26.475124 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:44:26.475885 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:44:26.487068 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:44:26.500722 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:44:26.509551 ignition[928]: INFO : Ignition 2.19.0 Jan 30 13:44:26.509551 ignition[928]: INFO : Stage: mount Jan 30 13:44:26.511197 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:44:26.511197 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:44:26.511197 ignition[928]: INFO : mount: mount passed Jan 30 13:44:26.511197 ignition[928]: INFO : Ignition finished successfully Jan 30 13:44:26.516901 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:44:26.530142 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:44:26.921987 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:44:26.935177 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:44:26.943702 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Jan 30 13:44:26.943731 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:44:26.943742 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:44:26.945189 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:44:26.948069 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:44:26.949482 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
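The OEM partition /dev/vda6 above is BTRFS, mounted at /sysroot/oem; the kernel lines show the mount selecting the hardware-accelerated crc32c checksum, the free-space tree, and async discard on its own. Reproducing the mount by hand needs only the device and target (the option shown is just the explicit spelling of what the log reports being auto-enabled):

    mount -t btrfs /dev/disk/by-label/OEM /mnt/oem
    # explicit form of the auto-enabled discard behaviour:
    mount -t btrfs -o discard=async /dev/vda6 /mnt/oem

The initrd-setup-root "cut: ... No such file or directory" lines above appear to be the root-filesystem setup service probing /sysroot/etc for passwd, group, shadow, and gshadow; on a fresh root filesystem those files do not exist yet, so the messages are expected rather than failures.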
Jan 30 13:44:26.969792 ignition[959]: INFO : Ignition 2.19.0 Jan 30 13:44:26.969792 ignition[959]: INFO : Stage: files Jan 30 13:44:26.971664 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:44:26.971664 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:44:26.971664 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:44:26.975397 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:44:26.975397 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:44:26.975397 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:44:26.975397 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:44:26.975397 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:44:26.975186 unknown[959]: wrote ssh authorized keys file for user: core Jan 30 13:44:26.983272 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:44:26.983272 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 13:44:27.009983 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:44:27.086353 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:44:27.086353 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:44:27.090327 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:44:27.590227 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:44:27.687277 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:44:27.689259 ignition[959]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:44:27.689259 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:44:28.052196 systemd-networkd[783]: eth0: Gained IPv6LL Jan 30 13:44:28.116073 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:44:28.524472 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:44:28.524472 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:44:28.528239 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:44:28.528239 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:44:28.528239 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:44:28.528239 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 13:44:28.528239 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:44:28.528239 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:44:28.528239 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 13:44:28.528239 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:44:28.549589 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:44:28.555981 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:44:28.557631 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:44:28.557631 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:44:28.557631 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:44:28.557631 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:44:28.557631 
ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:44:28.557631 ignition[959]: INFO : files: files passed Jan 30 13:44:28.557631 ignition[959]: INFO : Ignition finished successfully Jan 30 13:44:28.559458 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:44:28.570290 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:44:28.573063 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:44:28.574899 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:44:28.575005 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:44:28.584099 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:44:28.587193 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:44:28.587193 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:44:28.590406 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:44:28.590097 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:44:28.591813 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:44:28.605189 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:44:28.632838 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:44:28.632968 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:44:28.635327 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:44:28.637432 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:44:28.639483 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:44:28.640281 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:44:28.657984 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:44:28.659391 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:44:28.672549 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:44:28.672700 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:44:28.673074 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:44:28.710594 ignition[1013]: INFO : Ignition 2.19.0 Jan 30 13:44:28.710594 ignition[1013]: INFO : Stage: umount Jan 30 13:44:28.710594 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:44:28.710594 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:44:28.710594 ignition[1013]: INFO : umount: umount passed Jan 30 13:44:28.710594 ignition[1013]: INFO : Ignition finished successfully Jan 30 13:44:28.673381 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:44:28.673493 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:44:28.674089 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
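Taken together, the files stage above downloads helm and the cilium CLI, installs the Kubernetes v1.32.0 sysext image plus its /etc/extensions symlink, writes prepare-helm.service, and adjusts presets (prepare-helm enabled, coreos-metadata disabled) before recording its result in /etc/.ignition-result.json. A config fragment producing this kind of activity combines storage.files with systemd.units; the sketch below reuses the real helm URL from the log but invents the unit body, which the journal never prints:

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [{
          "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
          "mode": 420,
          "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" }
        }]
      },
      "systemd": {
        "units": [
          {
            "name": "prepare-helm.service",
            "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar --directory /opt/bin --extract --gzip --file /opt/helm-v3.17.0-linux-amd64.tar.gz --strip-components 1 linux-amd64/helm\n[Install]\nWantedBy=multi-user.target\n"
          },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }

mode 420 is decimal for 0644, and enabled: false is what drives the "setting preset to disabled" and enablement-symlink removal operations seen above.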
Jan 30 13:44:28.674417 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:44:28.674749 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:44:28.675108 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:44:28.675438 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:44:28.675778 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:44:28.676124 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:44:28.676464 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:44:28.676796 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:44:28.677304 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:44:28.677618 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:44:28.677726 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:44:28.678328 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:44:28.678681 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:44:28.678976 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:44:28.679100 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:44:28.679506 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:44:28.679611 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:44:28.680327 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:44:28.680442 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:44:28.680750 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:44:28.680999 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:44:28.685085 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:44:28.685368 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:44:28.685702 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:44:28.686055 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:44:28.686148 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:44:28.686570 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:44:28.686657 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:44:28.687087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:44:28.687195 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:44:28.687583 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:44:28.687683 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:44:28.688783 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:44:28.688998 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:44:28.689122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:44:28.690049 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:44:28.690335 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:44:28.690446 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 30 13:44:28.690748 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:44:28.690842 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:44:28.695566 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:44:28.695677 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:44:28.706946 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:44:28.709093 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:44:28.710821 systemd[1]: Stopped target network.target - Network. Jan 30 13:44:28.712506 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:44:28.712568 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:44:28.714333 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:44:28.714381 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:44:28.716330 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:44:28.716376 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:44:28.718622 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:44:28.718686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:44:28.720833 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:44:28.722696 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:44:28.725822 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:44:28.726072 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 30 13:44:28.728085 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:44:28.728206 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:44:28.729745 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:44:28.729784 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:44:28.740132 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:44:28.742088 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:44:28.742145 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:44:28.744546 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:44:28.747332 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:44:28.747458 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:44:28.755844 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:44:28.755939 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:44:28.757452 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:44:28.757505 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:44:28.759612 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:44:28.759666 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:44:28.762752 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:44:28.762878 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:44:28.765711 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 30 13:44:28.765876 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:44:28.767567 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:44:28.767616 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:44:28.769375 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:44:28.769422 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:44:28.771286 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:44:28.771333 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:44:28.773369 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:44:28.773426 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:44:28.775510 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:44:28.775557 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:44:28.787208 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:44:28.789294 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:44:28.789355 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:44:28.791525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:44:28.791574 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:28.795580 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:44:28.795699 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:44:28.906089 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:44:28.906217 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:44:28.908279 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:44:28.909936 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:44:28.909987 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:44:28.921180 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:44:28.927671 systemd[1]: Switching root. Jan 30 13:44:28.961415 systemd-journald[192]: Journal stopped Jan 30 13:44:30.132333 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 30 13:44:30.132718 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:44:30.132733 kernel: SELinux: policy capability open_perms=1 Jan 30 13:44:30.132744 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:44:30.132755 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:44:30.132766 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:44:30.132781 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:44:30.132791 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:44:30.132802 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:44:30.132821 kernel: audit: type=1403 audit(1738244669.403:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:44:30.132833 systemd[1]: Successfully loaded SELinux policy in 41.695ms. Jan 30 13:44:30.132855 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.606ms. 
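The SIGTERM to journald marks the pivot out of the initramfs: PID 1 switches root, the initrd journal stops, and the main journald instance takes over (it is restarted a few lines below as "Scheduled restart job, restart counter is at 1"). A transcript with the same microsecond timestamps as this one can be pulled from any systemd machine with the precise output mode:

    journalctl -b -o short-precise        # current boot
    journalctl --list-boots               # enumerate older boots
    journalctl -b -1 -o short-precise     # previous boot

The -o short-precise format is what produces the 13:44:28.961415-style timestamps used throughout this log.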
Jan 30 13:44:30.132868 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:44:30.132880 systemd[1]: Detected virtualization kvm. Jan 30 13:44:30.132892 systemd[1]: Detected architecture x86-64. Jan 30 13:44:30.132906 systemd[1]: Detected first boot. Jan 30 13:44:30.132918 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:44:30.132929 zram_generator::config[1057]: No configuration found. Jan 30 13:44:30.132942 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:44:30.132954 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:44:30.132968 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:44:30.132980 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:44:30.132997 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:44:30.133015 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:44:30.133027 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:44:30.133058 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:44:30.133071 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:44:30.133083 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:44:30.133095 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:44:30.133107 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:44:30.133119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:44:30.133130 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:44:30.133145 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:44:30.133157 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:44:30.133169 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:44:30.133181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:44:30.133192 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:44:30.133204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:44:30.133216 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:44:30.133227 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:44:30.133239 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:44:30.133260 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:44:30.133272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:44:30.133283 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:44:30.133295 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:44:30.133307 systemd[1]: Reached target swap.target - Swaps. 
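"Initializing machine ID from VM UUID" means this first boot derives /etc/machine-id from the UUID the hypervisor exposes over DMI rather than generating a random one, so the ID stays stable if the VM is re-imaged. The two values can be compared on a KVM guest (root is needed for the DMI read):

    cat /sys/class/dmi/id/product_uuid
    cat /etc/machine-id

They are not byte-identical: machine-id is the same UUID normalized to 32 lowercase hex digits with the dashes removed.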
Jan 30 13:44:30.133319 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:44:30.133331 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:44:30.133345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:44:30.133357 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:44:30.133396 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:44:30.133409 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:44:30.133458 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:44:30.133511 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:44:30.133523 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:44:30.133543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:30.133554 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:44:30.133570 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:44:30.133582 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:44:30.133595 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:44:30.133607 systemd[1]: Reached target machines.target - Containers. Jan 30 13:44:30.133618 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:44:30.133632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:44:30.133645 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:44:30.133656 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:44:30.133668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:44:30.133683 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:44:30.133695 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:44:30.133706 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:44:30.133718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:44:30.133730 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:44:30.133742 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:44:30.133754 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:44:30.133765 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:44:30.133779 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:44:30.133791 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:44:30.133803 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:44:30.133815 kernel: fuse: init (API version 7.39) Jan 30 13:44:30.133827 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
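All the modprobe@*.service starts above are instances of one template unit; the text after "@" names the module, so modprobe@fuse.service loads fuse, modprobe@loop.service loads loop, and so on (the kernel's "fuse: init" and "loop: module loaded" confirmations land in the adjacent lines). The same instances can be driven by hand:

    systemctl start modprobe@configfs.service
    # roughly equivalent to:
    modprobe configfs

Wrapping modprobe in template units lets module loads be ordered against other units and deduplicated, which is why early boot prefers them over raw modprobe calls.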
Jan 30 13:44:30.133838 kernel: loop: module loaded Jan 30 13:44:30.133850 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:44:30.133862 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:44:30.133874 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:44:30.133890 systemd[1]: Stopped verity-setup.service. Jan 30 13:44:30.133902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:30.133914 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:44:30.133926 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:44:30.133970 systemd-journald[1127]: Collecting audit messages is disabled. Jan 30 13:44:30.134028 kernel: ACPI: bus type drm_connector registered Jan 30 13:44:30.134331 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:44:30.134345 systemd-journald[1127]: Journal started Jan 30 13:44:30.134376 systemd-journald[1127]: Runtime Journal (/run/log/journal/c74c28b50ad948c595019c13449e9e3f) is 6.0M, max 48.3M, 42.2M free. Jan 30 13:44:29.913773 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:44:29.928658 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:44:29.929118 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:44:30.136383 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:44:30.137181 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:44:30.138481 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:44:30.139756 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:44:30.141075 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:44:30.142609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:44:30.144199 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:44:30.144382 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:44:30.146014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:44:30.146207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:44:30.147785 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:44:30.147958 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:44:30.149380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:44:30.149550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:44:30.151167 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:44:30.151335 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:44:30.152751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:44:30.152916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:44:30.154432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:44:30.156020 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:44:30.157576 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
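journald sizes its volatile journal as a percentage of the backing /run filesystem, which is where the "6.0M, max 48.3M" figures above come from; they are computed, not fixed defaults. The caps can be pinned with a drop-in, sketched here with illustrative values:

    # /etc/systemd/journald.conf.d/10-size.conf (illustrative)
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=200M

RuntimeMaxUse bounds /run/log/journal before the flush to persistent storage; SystemMaxUse bounds /var/log/journal after it (that flush happens a few lines below).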
Jan 30 13:44:30.173496 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:44:30.183108 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:44:30.185541 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:44:30.186821 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:44:30.186914 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:44:30.188991 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:44:30.191329 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:44:30.193656 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:44:30.194918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:44:30.197789 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:44:30.200520 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:44:30.201708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:44:30.204596 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:44:30.205796 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:44:30.208587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:44:30.213344 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:44:30.219850 systemd-journald[1127]: Time spent on flushing to /var/log/journal/c74c28b50ad948c595019c13449e9e3f is 17.076ms for 998 entries. Jan 30 13:44:30.219850 systemd-journald[1127]: System Journal (/var/log/journal/c74c28b50ad948c595019c13449e9e3f) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:44:30.250837 systemd-journald[1127]: Received client request to flush runtime journal. Jan 30 13:44:30.250869 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:44:30.218547 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:44:30.222691 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:44:30.224165 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:44:30.226003 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:44:30.234299 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:44:30.236106 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:44:30.242618 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:44:30.253130 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:44:30.256987 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:44:30.259473 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:44:30.261420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
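systemd-sysext, starting above, is the service that turns the .raw images Ignition installed under /etc/extensions and /opt/extensions into live content by overlaying them read-only onto /usr and /opt; the actual merge is reported by (sd-merge) a few lines below. For an image to be accepted it must carry a release file matching the host, roughly of this shape (keys illustrative of the mechanism, not dumped from this image):

    # inside kubernetes.raw at:
    #   usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

    # inspect or redo the merge on a running system:
    systemd-sysext status
    systemd-sysext refresh

ID=_any is also accepted for distribution-independent extensions.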
Jan 30 13:44:30.282132 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:44:30.284591 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:44:30.285384 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:44:30.296995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:44:30.299105 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:44:30.299813 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:44:30.310130 kernel: loop1: detected capacity change from 0 to 218376 Jan 30 13:44:30.326411 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 30 13:44:30.326432 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 30 13:44:30.332572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:44:30.343061 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 13:44:30.383069 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 13:44:30.394054 kernel: loop4: detected capacity change from 0 to 218376 Jan 30 13:44:30.404058 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 13:44:30.413892 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:44:30.414506 (sd-merge)[1196]: Merged extensions into '/usr'. Jan 30 13:44:30.419362 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:44:30.419377 systemd[1]: Reloading... Jan 30 13:44:30.477067 zram_generator::config[1221]: No configuration found. Jan 30 13:44:30.529104 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:44:30.601289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:44:30.650458 systemd[1]: Reloading finished in 230 ms. Jan 30 13:44:30.682054 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:44:30.683577 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:44:30.696298 systemd[1]: Starting ensure-sysext.service... Jan 30 13:44:30.701178 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:44:30.704537 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:44:30.704561 systemd[1]: Reloading... Jan 30 13:44:30.721173 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:44:30.721551 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:44:30.722525 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:44:30.722811 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 30 13:44:30.722891 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 30 13:44:30.726172 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 13:44:30.726181 systemd-tmpfiles[1260]: Skipping /boot Jan 30 13:44:30.739867 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:44:30.739991 systemd-tmpfiles[1260]: Skipping /boot Jan 30 13:44:30.766068 zram_generator::config[1289]: No configuration found. Jan 30 13:44:30.864589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:44:30.913873 systemd[1]: Reloading finished in 208 ms. Jan 30 13:44:30.934530 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:44:30.946428 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:44:30.955003 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:44:30.957429 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:44:30.959746 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:44:30.966261 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:44:30.969479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:44:30.972392 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:44:30.976417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:30.976584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:44:30.979235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:44:30.987067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:44:30.993454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:44:30.994733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:44:30.997981 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:44:30.999204 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:31.000387 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:44:31.002451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:44:31.002624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:44:31.004622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:44:31.004795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:44:31.006717 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:44:31.006888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:44:31.008212 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 30 13:44:31.015654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
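The systemd-tmpfiles "Duplicate line" warnings above are benign: two packaged tmpfiles.d files declare the same path and the later declaration is ignored. The referenced entries are not printed, but a duplicate of the reported kind would look like this (lines illustrative):

    # /usr/lib/tmpfiles.d/provision.conf
    d /root 0700 root root -
    # the same path declared again in another tmpfiles.d file produces:
    #   "Duplicate line for path "/root", ignoring."

"Skipping /boot" is similarly harmless: /boot sits behind an autofs automount point, and tmpfiles declines to touch it rather than trigger the mount.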
Jan 30 13:44:31.015863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:44:31.021913 augenrules[1355]: No rules Jan 30 13:44:31.022957 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:44:31.025856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:44:31.028205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:44:31.029538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:44:31.031595 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:44:31.032756 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:31.033862 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:44:31.036114 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:44:31.038141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:44:31.038565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:44:31.040597 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:44:31.040781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:44:31.042440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:44:31.042607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:44:31.044487 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:44:31.046547 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:44:31.058577 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:44:31.071776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:31.071969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:44:31.078269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:44:31.081160 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:44:31.084368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:44:31.087614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:44:31.088778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:44:31.091896 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:44:31.092989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:44:31.094011 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:44:31.096294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:44:31.096478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:44:31.098392 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 30 13:44:31.098565 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:44:31.101908 systemd[1]: Finished ensure-sysext.service. Jan 30 13:44:31.103084 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1364) Jan 30 13:44:31.118933 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:44:31.119165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:44:31.121055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:44:31.123261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:44:31.135003 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:44:31.140460 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:44:31.140537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:44:31.148221 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:44:31.149426 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:44:31.156468 systemd-resolved[1329]: Positive Trust Anchors: Jan 30 13:44:31.156489 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:44:31.156522 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:44:31.162922 systemd-resolved[1329]: Defaulting to hostname 'linux'. Jan 30 13:44:31.167394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:44:31.177267 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:44:31.178538 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:44:31.180246 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:44:31.185356 systemd-networkd[1398]: lo: Link UP Jan 30 13:44:31.185369 systemd-networkd[1398]: lo: Gained carrier Jan 30 13:44:31.187020 systemd-networkd[1398]: Enumeration completed Jan 30 13:44:31.188137 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:44:31.189446 systemd[1]: Reached target network.target - Network. Jan 30 13:44:31.191481 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:44:31.191551 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
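As in the initrd, networkd matches eth0 against the shipped catch-all policy zz-default.network, which pairs a wildcard name match with DHCP; abridged, a fallback of that shape looks like:

    # /usr/lib/systemd/network/zz-default.network (abridged sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes

The "potentially unpredictable interface name" note is networkd warning that a match on a kernel-assigned name such as eth0 can change across reboots or hardware changes; matching on [Match] MACAddress= instead silences it.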
Jan 30 13:44:31.192375 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:44:31.192466 systemd-networkd[1398]: eth0: Link UP Jan 30 13:44:31.192507 systemd-networkd[1398]: eth0: Gained carrier Jan 30 13:44:31.192550 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:44:31.195085 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:44:31.196194 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:44:31.198484 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:44:31.202105 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:44:31.207173 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:44:31.208198 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:44:31.839483 systemd-resolved[1329]: Clock change detected. Flushing caches. Jan 30 13:44:31.839530 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:44:31.839573 systemd-timesyncd[1410]: Initial clock synchronization to Thu 2025-01-30 13:44:31.839446 UTC. Jan 30 13:44:31.844471 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:44:31.850040 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 13:44:31.861816 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:44:31.862868 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:44:31.862885 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:44:31.863096 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:44:31.876951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:44:31.878706 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:44:31.880181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:44:31.880382 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:31.884995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:44:31.942312 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:44:31.976110 kernel: kvm_amd: TSC scaling supported Jan 30 13:44:31.976149 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:44:31.976163 kernel: kvm_amd: Nested Paging enabled Jan 30 13:44:31.977091 kernel: kvm_amd: LBR virtualization supported Jan 30 13:44:31.977121 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:44:31.977649 kernel: kvm_amd: Virtual GIF supported Jan 30 13:44:31.997644 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:44:32.026954 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:44:32.041769 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:44:32.049876 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:44:32.079689 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:44:32.081261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
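timesyncd synchronizes against 10.0.0.1:123, the same host that served the DHCP lease, and resolved's "Clock change detected. Flushing caches." records the resulting step change (visible as the jump from 13:44:31.2 to 13:44:31.8 in the timestamps above). The server choice can be made explicit with a drop-in (values illustrative):

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf (illustrative)
    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org

Once synchronized, timedatectl timesync-status reports the active server, stratum, and measured offset.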
Jan 30 13:44:32.082386 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:44:32.083564 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:44:32.084858 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:44:32.086287 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:44:32.087472 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:44:32.088721 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:44:32.089966 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:44:32.089993 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:44:32.090889 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:44:32.092355 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:44:32.095167 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:44:32.111347 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:44:32.113866 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:44:32.115479 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:44:32.116712 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:44:32.117726 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:44:32.118785 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:44:32.118812 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:44:32.119790 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:44:32.121924 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:44:32.124263 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:44:32.126030 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:44:32.129457 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:44:32.130795 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:44:32.134773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:44:32.137442 jq[1440]: false Jan 30 13:44:32.137761 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:44:32.144769 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
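Note that docker.socket and sshd.socket above are listening while neither daemon is running: socket activation has systemd hold the sockets and start dockerd or sshd only on the first connection. Which listeners are held, and what each would activate, can be checked with:

    systemctl list-sockets
    systemctl status docker.socket

The earlier reload warnings about "ListenStream= references a path below legacy directory /var/run/" refer to this same docker.socket unit.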
Jan 30 13:44:32.147137 extend-filesystems[1441]: Found loop3 Jan 30 13:44:32.147137 extend-filesystems[1441]: Found loop4 Jan 30 13:44:32.147137 extend-filesystems[1441]: Found loop5 Jan 30 13:44:32.147137 extend-filesystems[1441]: Found sr0 Jan 30 13:44:32.147137 extend-filesystems[1441]: Found vda Jan 30 13:44:32.147137 extend-filesystems[1441]: Found vda1 Jan 30 13:44:32.147137 extend-filesystems[1441]: Found vda2 Jan 30 13:44:32.155676 extend-filesystems[1441]: Found vda3 Jan 30 13:44:32.155676 extend-filesystems[1441]: Found usr Jan 30 13:44:32.155676 extend-filesystems[1441]: Found vda4 Jan 30 13:44:32.155676 extend-filesystems[1441]: Found vda6 Jan 30 13:44:32.155676 extend-filesystems[1441]: Found vda7 Jan 30 13:44:32.155676 extend-filesystems[1441]: Found vda9 Jan 30 13:44:32.155676 extend-filesystems[1441]: Checking size of /dev/vda9 Jan 30 13:44:32.148185 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:44:32.151151 dbus-daemon[1439]: [system] SELinux support is enabled Jan 30 13:44:32.171362 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:44:32.171389 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1364) Jan 30 13:44:32.171449 extend-filesystems[1441]: Resized partition /dev/vda9 Jan 30 13:44:32.157900 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:44:32.172879 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:44:32.162492 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:44:32.170730 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:44:32.171858 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:44:32.176964 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:44:32.179177 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:44:32.184764 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:44:32.187823 jq[1460]: true Jan 30 13:44:32.189280 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:44:32.189513 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:44:32.190340 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:44:32.190554 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:44:32.193354 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:44:32.193559 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:44:32.202189 update_engine[1459]: I20250130 13:44:32.202124 1459 main.cc:92] Flatcar Update Engine starting Jan 30 13:44:32.206196 update_engine[1459]: I20250130 13:44:32.206065 1459 update_check_scheduler.cc:74] Next update check in 10m13s Jan 30 13:44:32.206952 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:44:32.210929 jq[1466]: true Jan 30 13:44:32.218211 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:44:32.222309 tar[1464]: linux-amd64/LICENSE Jan 30 13:44:32.230664 systemd[1]: Started update-engine.service - Update Engine. 
Jan 30 13:44:32.232223 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:44:32.232253 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:44:32.233766 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:44:32.233787 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:44:32.239320 tar[1464]: linux-amd64/helm Jan 30 13:44:32.239381 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:44:32.239381 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:44:32.239381 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:44:32.243760 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:44:32.244811 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Jan 30 13:44:32.246001 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:44:32.246340 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:44:32.259723 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:44:32.259747 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:44:32.260111 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:44:32.262777 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:44:32.262821 systemd-logind[1454]: New seat seat0. Jan 30 13:44:32.264302 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:44:32.268507 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:44:32.297093 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:44:32.405651 containerd[1467]: time="2025-01-30T13:44:32.405527786Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:44:32.427737 containerd[1467]: time="2025-01-30T13:44:32.427632722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:32.429365 containerd[1467]: time="2025-01-30T13:44:32.429320156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:32.429365 containerd[1467]: time="2025-01-30T13:44:32.429355192Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:44:32.429461 containerd[1467]: time="2025-01-30T13:44:32.429372655Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:44:32.429553 containerd[1467]: time="2025-01-30T13:44:32.429528347Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 30 13:44:32.429553 containerd[1467]: time="2025-01-30T13:44:32.429550007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:32.429648 containerd[1467]: time="2025-01-30T13:44:32.429614528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:32.429648 containerd[1467]: time="2025-01-30T13:44:32.429645877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:32.429877 containerd[1467]: time="2025-01-30T13:44:32.429848457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:32.429877 containerd[1467]: time="2025-01-30T13:44:32.429870588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:32.429933 containerd[1467]: time="2025-01-30T13:44:32.429884003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:32.429933 containerd[1467]: time="2025-01-30T13:44:32.429895275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:32.430018 containerd[1467]: time="2025-01-30T13:44:32.430000271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:32.430255 containerd[1467]: time="2025-01-30T13:44:32.430230142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:44:32.430388 containerd[1467]: time="2025-01-30T13:44:32.430362661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:44:32.430388 containerd[1467]: time="2025-01-30T13:44:32.430380384Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:44:32.430504 containerd[1467]: time="2025-01-30T13:44:32.430480291Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:44:32.430564 containerd[1467]: time="2025-01-30T13:44:32.430548389Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:44:32.436643 containerd[1467]: time="2025-01-30T13:44:32.436609203Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:44:32.436774 containerd[1467]: time="2025-01-30T13:44:32.436726433Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.436749747Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.436882846Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.436899688Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437060189Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437280843Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437381131Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437396820Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437409484Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437423149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437436034Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437449659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437464206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437478333Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:44:32.437837 containerd[1467]: time="2025-01-30T13:44:32.437497539Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437510102Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437520722Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437540219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437564134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437576307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437588289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437600161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437613827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437639946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437652680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437665073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437685792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437698295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438106 containerd[1467]: time="2025-01-30T13:44:32.437721479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438340 containerd[1467]: time="2025-01-30T13:44:32.437734112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438340 containerd[1467]: time="2025-01-30T13:44:32.437747878Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:44:32.438340 containerd[1467]: time="2025-01-30T13:44:32.437771092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438340 containerd[1467]: time="2025-01-30T13:44:32.437782213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.438340 containerd[1467]: time="2025-01-30T13:44:32.437792873Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:44:32.439823 containerd[1467]: time="2025-01-30T13:44:32.439801779Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:44:32.440247 containerd[1467]: time="2025-01-30T13:44:32.440227508Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:44:32.440408 containerd[1467]: time="2025-01-30T13:44:32.440394430Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:44:32.440493 containerd[1467]: time="2025-01-30T13:44:32.440478248Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:44:32.440556 containerd[1467]: time="2025-01-30T13:44:32.440528031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.440633 containerd[1467]: time="2025-01-30T13:44:32.440600437Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:44:32.440689 containerd[1467]: time="2025-01-30T13:44:32.440677521Z" level=info msg="NRI interface is disabled by configuration." 
Jan 30 13:44:32.440744 containerd[1467]: time="2025-01-30T13:44:32.440723838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:44:32.441143 containerd[1467]: time="2025-01-30T13:44:32.441096657Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:44:32.441342 containerd[1467]: time="2025-01-30T13:44:32.441327851Z" level=info msg="Connect containerd service" Jan 30 13:44:32.441436 containerd[1467]: time="2025-01-30T13:44:32.441423450Z" level=info msg="using legacy CRI server" Jan 30 13:44:32.441497 containerd[1467]: time="2025-01-30T13:44:32.441486168Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:44:32.441646 containerd[1467]: time="2025-01-30T13:44:32.441613186Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:44:32.442392 containerd[1467]: time="2025-01-30T13:44:32.442372549Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:44:32.442588 containerd[1467]: time="2025-01-30T13:44:32.442549050Z" level=info msg="Start subscribing containerd event" Jan 30 13:44:32.442953 containerd[1467]: time="2025-01-30T13:44:32.442933571Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:44:32.443054 containerd[1467]: time="2025-01-30T13:44:32.443039791Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:44:32.443157 containerd[1467]: time="2025-01-30T13:44:32.443144747Z" level=info msg="Start recovering state" Jan 30 13:44:32.443302 containerd[1467]: time="2025-01-30T13:44:32.443288607Z" level=info msg="Start event monitor" Jan 30 13:44:32.443375 containerd[1467]: time="2025-01-30T13:44:32.443364059Z" level=info msg="Start snapshots syncer" Jan 30 13:44:32.443474 containerd[1467]: time="2025-01-30T13:44:32.443461601Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:44:32.443597 containerd[1467]: time="2025-01-30T13:44:32.443575325Z" level=info msg="Start streaming server" Jan 30 13:44:32.443837 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:44:32.444150 containerd[1467]: time="2025-01-30T13:44:32.444132339Z" level=info msg="containerd successfully booted in 0.039779s" Jan 30 13:44:32.491947 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:44:32.516403 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:44:32.522853 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:44:32.531643 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:44:32.531893 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:44:32.534779 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:44:32.551473 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:44:32.554765 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:44:32.564934 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:44:32.567046 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:44:32.568338 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:44:32.570749 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:59968.service - OpenSSH per-connection server daemon (10.0.0.1:59968). Jan 30 13:44:32.621064 sshd[1528]: Accepted publickey for core from 10.0.0.1 port 59968 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:32.623282 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:32.631335 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:44:32.639820 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:44:32.643130 systemd-logind[1454]: New session 1 of user core. Jan 30 13:44:32.653944 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:44:32.659158 tar[1464]: linux-amd64/README.md Jan 30 13:44:32.664890 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:44:32.670153 (systemd)[1532]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:44:32.676930 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 30 13:44:32.770184 systemd[1532]: Queued start job for default target default.target. Jan 30 13:44:32.779986 systemd[1532]: Created slice app.slice - User Application Slice. Jan 30 13:44:32.780017 systemd[1532]: Reached target paths.target - Paths. Jan 30 13:44:32.780032 systemd[1532]: Reached target timers.target - Timers. Jan 30 13:44:32.781556 systemd[1532]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:44:32.792928 systemd[1532]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:44:32.793075 systemd[1532]: Reached target sockets.target - Sockets. Jan 30 13:44:32.793095 systemd[1532]: Reached target basic.target - Basic System. Jan 30 13:44:32.793141 systemd[1532]: Reached target default.target - Main User Target. Jan 30 13:44:32.793178 systemd[1532]: Startup finished in 116ms. Jan 30 13:44:32.793516 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:44:32.796005 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:44:32.857113 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:57920.service - OpenSSH per-connection server daemon (10.0.0.1:57920). Jan 30 13:44:32.894401 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 57920 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:32.895844 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:32.899543 systemd-logind[1454]: New session 2 of user core. Jan 30 13:44:32.906760 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:44:32.960004 sshd[1546]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:32.977234 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:57920.service: Deactivated successfully. Jan 30 13:44:32.978864 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:44:32.980336 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:44:32.988911 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:57932.service - OpenSSH per-connection server daemon (10.0.0.1:57932). Jan 30 13:44:32.991161 systemd-logind[1454]: Removed session 2. Jan 30 13:44:33.022866 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 57932 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:33.024247 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:33.027749 systemd-logind[1454]: New session 3 of user core. Jan 30 13:44:33.035737 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:44:33.089974 sshd[1553]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:33.093782 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:57932.service: Deactivated successfully. Jan 30 13:44:33.095502 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:44:33.096115 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:44:33.096922 systemd-logind[1454]: Removed session 3. Jan 30 13:44:33.354751 systemd-networkd[1398]: eth0: Gained IPv6LL Jan 30 13:44:33.357824 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:44:33.359766 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:44:33.370816 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:44:33.373227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:44:33.375400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:44:33.393954 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:44:33.394199 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:44:33.395728 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:44:33.396159 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:44:34.042048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:34.043688 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:44:34.045006 systemd[1]: Startup finished in 718ms (kernel) + 5.702s (initrd) + 4.052s (userspace) = 10.473s. Jan 30 13:44:34.047364 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:44:34.441398 kubelet[1581]: E0130 13:44:34.441261 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:44:34.445232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:44:34.445439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:44:43.101523 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:42316.service - OpenSSH per-connection server daemon (10.0.0.1:42316). Jan 30 13:44:43.140300 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 42316 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:43.141819 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:43.145773 systemd-logind[1454]: New session 4 of user core. Jan 30 13:44:43.159804 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:44:43.213955 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:43.221256 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:42316.service: Deactivated successfully. Jan 30 13:44:43.223011 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:44:43.224320 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:44:43.225582 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:42326.service - OpenSSH per-connection server daemon (10.0.0.1:42326). Jan 30 13:44:43.226295 systemd-logind[1454]: Removed session 4. Jan 30 13:44:43.265097 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 42326 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:43.266566 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:43.270677 systemd-logind[1454]: New session 5 of user core. Jan 30 13:44:43.284751 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:44:43.334001 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:43.341284 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:42326.service: Deactivated successfully. Jan 30 13:44:43.342975 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:44:43.344516 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. 
Jan 30 13:44:43.345758 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:42340.service - OpenSSH per-connection server daemon (10.0.0.1:42340). Jan 30 13:44:43.346423 systemd-logind[1454]: Removed session 5. Jan 30 13:44:43.384143 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 42340 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:43.385665 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:43.389425 systemd-logind[1454]: New session 6 of user core. Jan 30 13:44:43.397761 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:44:43.451998 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:43.469445 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:42340.service: Deactivated successfully. Jan 30 13:44:43.470975 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:44:43.472347 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:44:43.473518 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:42350.service - OpenSSH per-connection server daemon (10.0.0.1:42350). Jan 30 13:44:43.474230 systemd-logind[1454]: Removed session 6. Jan 30 13:44:43.510987 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 42350 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:43.512344 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:43.516093 systemd-logind[1454]: New session 7 of user core. Jan 30 13:44:43.525746 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:44:43.659518 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:44:43.659883 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:44:43.675896 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 30 13:44:43.677565 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:43.687377 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:42350.service: Deactivated successfully. Jan 30 13:44:43.689123 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:44:43.690801 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:44:43.700909 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:42356.service - OpenSSH per-connection server daemon (10.0.0.1:42356). Jan 30 13:44:43.701955 systemd-logind[1454]: Removed session 7. Jan 30 13:44:43.735100 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 42356 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:43.736576 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:43.740210 systemd-logind[1454]: New session 8 of user core. Jan 30 13:44:43.749759 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 13:44:43.804513 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:44:43.804893 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:44:43.808532 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 30 13:44:43.816428 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:44:43.816919 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:44:43.832874 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:44:43.834652 auditctl[1630]: No rules Jan 30 13:44:43.835998 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:44:43.836249 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:44:43.838002 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:44:43.868037 augenrules[1648]: No rules Jan 30 13:44:43.869839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:44:43.871168 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 30 13:44:43.873021 sshd[1623]: pam_unix(sshd:session): session closed for user core Jan 30 13:44:43.880617 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:42356.service: Deactivated successfully. Jan 30 13:44:43.882494 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:44:43.884270 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:44:43.891871 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:42364.service - OpenSSH per-connection server daemon (10.0.0.1:42364). Jan 30 13:44:43.892801 systemd-logind[1454]: Removed session 8. Jan 30 13:44:43.926179 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 42364 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:44:43.928002 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:44:43.932071 systemd-logind[1454]: New session 9 of user core. Jan 30 13:44:43.949787 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:44:44.002950 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:44:44.003286 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:44:44.275855 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:44:44.276026 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:44:44.541439 dockerd[1677]: time="2025-01-30T13:44:44.541290417Z" level=info msg="Starting up" Jan 30 13:44:44.542498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:44:44.550806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:44.791830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:44.795910 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:44:44.805995 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1138891436-merged.mount: Deactivated successfully. 
Jan 30 13:44:44.839145 kubelet[1708]: E0130 13:44:44.839012 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:44:44.845482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:44:44.845708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:44:44.846949 dockerd[1677]: time="2025-01-30T13:44:44.846910044Z" level=info msg="Loading containers: start." Jan 30 13:44:44.949649 kernel: Initializing XFRM netlink socket Jan 30 13:44:45.028861 systemd-networkd[1398]: docker0: Link UP Jan 30 13:44:45.054284 dockerd[1677]: time="2025-01-30T13:44:45.054169906Z" level=info msg="Loading containers: done." Jan 30 13:44:45.070682 dockerd[1677]: time="2025-01-30T13:44:45.070596947Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:44:45.070847 dockerd[1677]: time="2025-01-30T13:44:45.070746517Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:44:45.070916 dockerd[1677]: time="2025-01-30T13:44:45.070891479Z" level=info msg="Daemon has completed initialization" Jan 30 13:44:45.111668 dockerd[1677]: time="2025-01-30T13:44:45.110914973Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:44:45.112000 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:44:45.678541 containerd[1467]: time="2025-01-30T13:44:45.678492571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:44:45.803607 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck853447817-merged.mount: Deactivated successfully. Jan 30 13:44:46.365322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741052931.mount: Deactivated successfully. 
Jan 30 13:44:47.336477 containerd[1467]: time="2025-01-30T13:44:47.336400257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:47.347452 containerd[1467]: time="2025-01-30T13:44:47.347397447Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 13:44:47.348857 containerd[1467]: time="2025-01-30T13:44:47.348827458Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:47.352397 containerd[1467]: time="2025-01-30T13:44:47.352345805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:47.353322 containerd[1467]: time="2025-01-30T13:44:47.353289425Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.674757189s" Jan 30 13:44:47.353376 containerd[1467]: time="2025-01-30T13:44:47.353326003Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 13:44:47.354015 containerd[1467]: time="2025-01-30T13:44:47.353989798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:44:48.448754 containerd[1467]: time="2025-01-30T13:44:48.448690171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:48.449700 containerd[1467]: time="2025-01-30T13:44:48.449613452Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 13:44:48.451150 containerd[1467]: time="2025-01-30T13:44:48.451097104Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:48.453735 containerd[1467]: time="2025-01-30T13:44:48.453693973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:48.454729 containerd[1467]: time="2025-01-30T13:44:48.454684851Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.100665397s" Jan 30 13:44:48.454729 containerd[1467]: time="2025-01-30T13:44:48.454724315Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 13:44:48.455201 
containerd[1467]: time="2025-01-30T13:44:48.455182354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:44:49.707084 containerd[1467]: time="2025-01-30T13:44:49.707018466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:49.707818 containerd[1467]: time="2025-01-30T13:44:49.707742974Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 13:44:49.709000 containerd[1467]: time="2025-01-30T13:44:49.708963383Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:49.711863 containerd[1467]: time="2025-01-30T13:44:49.711805632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:49.714638 containerd[1467]: time="2025-01-30T13:44:49.714122656Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.258910366s" Jan 30 13:44:49.714638 containerd[1467]: time="2025-01-30T13:44:49.714162541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 13:44:49.715132 containerd[1467]: time="2025-01-30T13:44:49.715033023Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:44:50.654923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244828860.mount: Deactivated successfully. 
Jan 30 13:44:51.563181 containerd[1467]: time="2025-01-30T13:44:51.563118032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:51.564678 containerd[1467]: time="2025-01-30T13:44:51.564643502Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:44:51.565865 containerd[1467]: time="2025-01-30T13:44:51.565838393Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:51.567797 containerd[1467]: time="2025-01-30T13:44:51.567762531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:51.568339 containerd[1467]: time="2025-01-30T13:44:51.568285561Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.853218925s" Jan 30 13:44:51.568339 containerd[1467]: time="2025-01-30T13:44:51.568330125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:44:51.568882 containerd[1467]: time="2025-01-30T13:44:51.568849238Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:44:52.466149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302712282.mount: Deactivated successfully. 
Jan 30 13:44:53.124558 containerd[1467]: time="2025-01-30T13:44:53.124499472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:53.125338 containerd[1467]: time="2025-01-30T13:44:53.125282640Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 13:44:53.126604 containerd[1467]: time="2025-01-30T13:44:53.126573160Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:53.129323 containerd[1467]: time="2025-01-30T13:44:53.129279885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:53.130396 containerd[1467]: time="2025-01-30T13:44:53.130368015Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.561486246s" Jan 30 13:44:53.130451 containerd[1467]: time="2025-01-30T13:44:53.130397240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 13:44:53.130989 containerd[1467]: time="2025-01-30T13:44:53.130879374Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:44:53.622862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574879357.mount: Deactivated successfully. 
Jan 30 13:44:53.628541 containerd[1467]: time="2025-01-30T13:44:53.628501816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:53.629337 containerd[1467]: time="2025-01-30T13:44:53.629262352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:44:53.630281 containerd[1467]: time="2025-01-30T13:44:53.630247159Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:53.632425 containerd[1467]: time="2025-01-30T13:44:53.632387812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:53.633121 containerd[1467]: time="2025-01-30T13:44:53.633077145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 502.170008ms" Jan 30 13:44:53.633121 containerd[1467]: time="2025-01-30T13:44:53.633115878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:44:53.633619 containerd[1467]: time="2025-01-30T13:44:53.633590868Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:44:54.216079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622404085.mount: Deactivated successfully. Jan 30 13:44:55.096047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:44:55.103953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:55.259301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:55.264758 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:44:55.302329 kubelet[2026]: E0130 13:44:55.302233 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:44:55.305763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:44:55.305953 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:44:56.681411 containerd[1467]: time="2025-01-30T13:44:56.681330860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:56.682383 containerd[1467]: time="2025-01-30T13:44:56.682337768Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 13:44:56.683711 containerd[1467]: time="2025-01-30T13:44:56.683672000Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:56.686462 containerd[1467]: time="2025-01-30T13:44:56.686430171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:44:56.687573 containerd[1467]: time="2025-01-30T13:44:56.687514053Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.053894462s" Jan 30 13:44:56.687573 containerd[1467]: time="2025-01-30T13:44:56.687565320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 13:44:58.966525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:58.979824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:59.003742 systemd[1]: Reloading requested from client PID 2067 ('systemctl') (unit session-9.scope)... Jan 30 13:44:59.003757 systemd[1]: Reloading... Jan 30 13:44:59.077288 zram_generator::config[2106]: No configuration found. Jan 30 13:44:59.300218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:44:59.376255 systemd[1]: Reloading finished in 372 ms. Jan 30 13:44:59.427386 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:59.430340 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:44:59.430590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:59.432278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:44:59.583339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:44:59.587787 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:44:59.619601 kubelet[2156]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:44:59.619601 kubelet[2156]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:44:59.619601 kubelet[2156]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:44:59.620000 kubelet[2156]: I0130 13:44:59.619676 2156 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:44:59.895256 kubelet[2156]: I0130 13:44:59.895158 2156 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:44:59.895256 kubelet[2156]: I0130 13:44:59.895186 2156 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:44:59.895458 kubelet[2156]: I0130 13:44:59.895421 2156 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:44:59.917093 kubelet[2156]: E0130 13:44:59.917057 2156 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:59.918795 kubelet[2156]: I0130 13:44:59.918773 2156 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:44:59.924909 kubelet[2156]: E0130 13:44:59.924872 2156 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:44:59.924909 kubelet[2156]: I0130 13:44:59.924900 2156 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:44:59.930026 kubelet[2156]: I0130 13:44:59.929986 2156 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:44:59.930309 kubelet[2156]: I0130 13:44:59.930266 2156 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:44:59.930459 kubelet[2156]: I0130 13:44:59.930300 2156 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:44:59.930459 kubelet[2156]: I0130 13:44:59.930449 2156 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:44:59.930459 kubelet[2156]: I0130 13:44:59.930458 2156 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:44:59.931039 kubelet[2156]: I0130 13:44:59.931018 2156 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:44:59.933414 kubelet[2156]: I0130 13:44:59.933391 2156 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:44:59.933443 kubelet[2156]: I0130 13:44:59.933427 2156 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:44:59.933443 kubelet[2156]: I0130 13:44:59.933442 2156 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:44:59.933485 kubelet[2156]: I0130 13:44:59.933451 2156 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:44:59.935808 kubelet[2156]: I0130 13:44:59.935783 2156 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:44:59.936109 kubelet[2156]: I0130 13:44:59.936098 2156 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:44:59.936154 kubelet[2156]: W0130 13:44:59.936147 2156 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:44:59.937401 kubelet[2156]: W0130 13:44:59.937241 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 30 13:44:59.937401 kubelet[2156]: E0130 13:44:59.937297 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:59.937401 kubelet[2156]: W0130 13:44:59.937363 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 30 13:44:59.937401 kubelet[2156]: E0130 13:44:59.937392 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:59.938021 kubelet[2156]: I0130 13:44:59.937997 2156 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:44:59.938075 kubelet[2156]: I0130 13:44:59.938030 2156 server.go:1287] "Started kubelet" Jan 30 13:44:59.938166 kubelet[2156]: I0130 13:44:59.938145 2156 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:44:59.941330 kubelet[2156]: I0130 13:44:59.940108 2156 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:44:59.941330 kubelet[2156]: I0130 13:44:59.940380 2156 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:44:59.941330 kubelet[2156]: I0130 13:44:59.941185 2156 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:44:59.941657 kubelet[2156]: I0130 13:44:59.941619 2156 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:44:59.941657 kubelet[2156]: I0130 13:44:59.941645 2156 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:44:59.944028 kubelet[2156]: E0130 13:44:59.942024 2156 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c585082bb8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:44:59.938012046 +0000 UTC m=+0.346661391,LastTimestamp:2025-01-30 13:44:59.938012046 +0000 UTC m=+0.346661391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:44:59.944193 kubelet[2156]: I0130 13:44:59.944174 2156 volume_manager.go:297] 
"Starting Kubelet Volume Manager" Jan 30 13:44:59.944403 kubelet[2156]: E0130 13:44:59.944381 2156 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:44:59.944756 kubelet[2156]: I0130 13:44:59.944738 2156 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:44:59.944860 kubelet[2156]: I0130 13:44:59.944845 2156 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:44:59.945189 kubelet[2156]: E0130 13:44:59.945157 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Jan 30 13:44:59.945189 kubelet[2156]: W0130 13:44:59.945158 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 30 13:44:59.945266 kubelet[2156]: E0130 13:44:59.945242 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:44:59.945314 kubelet[2156]: E0130 13:44:59.945297 2156 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:44:59.945433 kubelet[2156]: I0130 13:44:59.945410 2156 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:44:59.945694 kubelet[2156]: I0130 13:44:59.945666 2156 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:44:59.946792 kubelet[2156]: I0130 13:44:59.946767 2156 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:44:59.960113 kubelet[2156]: I0130 13:44:59.960082 2156 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:44:59.961218 kubelet[2156]: I0130 13:44:59.961169 2156 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:44:59.961218 kubelet[2156]: I0130 13:44:59.961181 2156 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:44:59.961218 kubelet[2156]: I0130 13:44:59.961197 2156 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:44:59.962450 kubelet[2156]: I0130 13:44:59.962431 2156 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:44:59.962450 kubelet[2156]: I0130 13:44:59.962450 2156 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:44:59.962537 kubelet[2156]: I0130 13:44:59.962467 2156 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:44:59.962537 kubelet[2156]: I0130 13:44:59.962476 2156 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:44:59.962537 kubelet[2156]: E0130 13:44:59.962518 2156 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:44:59.964023 kubelet[2156]: W0130 13:44:59.963129 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 30 13:44:59.964023 kubelet[2156]: E0130 13:44:59.963159 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:45:00.044500 kubelet[2156]: E0130 13:45:00.044450 2156 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:45:00.062735 kubelet[2156]: E0130 13:45:00.062714 2156 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:45:00.144798 kubelet[2156]: E0130 13:45:00.144760 2156 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:45:00.146216 kubelet[2156]: E0130 13:45:00.146146 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Jan 30 13:45:00.245384 kubelet[2156]: E0130 13:45:00.245365 2156 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:45:00.263641 kubelet[2156]: E0130 13:45:00.263574 2156 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:45:00.284199 kubelet[2156]: I0130 13:45:00.284164 2156 policy_none.go:49] "None policy: Start" Jan 30 13:45:00.284199 kubelet[2156]: I0130 13:45:00.284188 2156 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:45:00.284252 kubelet[2156]: I0130 13:45:00.284204 2156 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:45:00.291460 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:45:00.305507 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:45:00.308319 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
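
The three "Created slice" entries are the kubelet's QoS cgroup hierarchy rendered in systemd's naming scheme, where nesting is encoded by dash-joined prefixes: kubepods.slice contains kubepods-burstable.slice and kubepods-besteffort.slice, and per-pod slices follow moments later. A tiny, simplified re-implementation of that naming rule (the kubelet's real translation also escapes special characters):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName encodes cgroup nesting the way systemd slices do: a child
    // slice's name carries all of its ancestors, joined by dashes.
    func sliceName(components ...string) string {
        return strings.Join(components, "-") + ".slice"
    }

    func main() {
        fmt.Println(sliceName("kubepods"))               // kubepods.slice
        fmt.Println(sliceName("kubepods", "burstable"))  // kubepods-burstable.slice
        fmt.Println(sliceName("kubepods", "besteffort")) // kubepods-besteffort.slice
        // Matches the per-pod slice created shortly afterwards in this journal:
        fmt.Println(sliceName("kubepods", "burstable", "podeb981ecac1bbdbbdd50082f31745642c"))
    }
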
Jan 30 13:45:00.315463 kubelet[2156]: I0130 13:45:00.315422 2156 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:45:00.315678 kubelet[2156]: I0130 13:45:00.315655 2156 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:45:00.315733 kubelet[2156]: I0130 13:45:00.315671 2156 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:45:00.315948 kubelet[2156]: I0130 13:45:00.315909 2156 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:45:00.316935 kubelet[2156]: E0130 13:45:00.316901 2156 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:45:00.317021 kubelet[2156]: E0130 13:45:00.316943 2156 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:45:00.417887 kubelet[2156]: I0130 13:45:00.417775 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:45:00.418128 kubelet[2156]: E0130 13:45:00.418077 2156 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 30 13:45:00.546732 kubelet[2156]: E0130 13:45:00.546682 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Jan 30 13:45:00.619809 kubelet[2156]: I0130 13:45:00.619784 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:45:00.620201 kubelet[2156]: E0130 13:45:00.620168 2156 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 30 13:45:00.672406 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 30 13:45:00.685404 kubelet[2156]: E0130 13:45:00.685372 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:45:00.687682 systemd[1]: Created slice kubepods-burstable-pod25a1a9d21f8d98e81843ac2350fec92a.slice - libcontainer container kubepods-burstable-pod25a1a9d21f8d98e81843ac2350fec92a.slice. Jan 30 13:45:00.689488 kubelet[2156]: E0130 13:45:00.689468 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:45:00.691709 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. 
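
"Attempting to register node" is, at bottom, a single Create of a v1.Node object; it fails here because the endpoint it posts to is served by the kube-apiserver static pod whose slice was just created and whose container does not exist yet. Roughly, in client-go terms (the config path is assumed, and the real kubelet populates addresses, capacity, and labels before posting):

    package main

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
        if _, err := cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
            // While the API server is down this is exactly the
            // "Unable to register node with API server" error above.
            log.Printf("register failed, will retry: %v", err)
        }
    }
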
Jan 30 13:45:00.693199 kubelet[2156]: E0130 13:45:00.693171 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:45:00.748586 kubelet[2156]: I0130 13:45:00.748533 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:00.748586 kubelet[2156]: I0130 13:45:00.748587 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:00.748586 kubelet[2156]: I0130 13:45:00.748612 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:00.748862 kubelet[2156]: I0130 13:45:00.748707 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25a1a9d21f8d98e81843ac2350fec92a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"25a1a9d21f8d98e81843ac2350fec92a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:00.748862 kubelet[2156]: I0130 13:45:00.748758 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:00.748862 kubelet[2156]: I0130 13:45:00.748785 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:00.748862 kubelet[2156]: I0130 13:45:00.748809 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:00.748862 kubelet[2156]: I0130 13:45:00.748827 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25a1a9d21f8d98e81843ac2350fec92a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"25a1a9d21f8d98e81843ac2350fec92a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:00.749019 kubelet[2156]: I0130 13:45:00.748849 2156 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25a1a9d21f8d98e81843ac2350fec92a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"25a1a9d21f8d98e81843ac2350fec92a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:00.892501 kubelet[2156]: W0130 13:45:00.892440 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 30 13:45:00.892501 kubelet[2156]: E0130 13:45:00.892499 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:45:00.986447 kubelet[2156]: E0130 13:45:00.986345 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:00.986874 containerd[1467]: time="2025-01-30T13:45:00.986818438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:00.990343 kubelet[2156]: E0130 13:45:00.990314 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:00.990758 containerd[1467]: time="2025-01-30T13:45:00.990714443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:25a1a9d21f8d98e81843ac2350fec92a,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:00.993947 kubelet[2156]: E0130 13:45:00.993921 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:00.994290 containerd[1467]: time="2025-01-30T13:45:00.994228281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:01.021348 kubelet[2156]: I0130 13:45:01.021311 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:45:01.021693 kubelet[2156]: E0130 13:45:01.021660 2156 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 30 13:45:01.201099 kubelet[2156]: W0130 13:45:01.201036 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 30 13:45:01.201099 kubelet[2156]: E0130 13:45:01.201093 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:45:01.287379 kubelet[2156]: W0130 
13:45:01.287207 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 30 13:45:01.287379 kubelet[2156]: E0130 13:45:01.287294 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:45:01.312858 kubelet[2156]: W0130 13:45:01.312835 2156 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 30 13:45:01.312919 kubelet[2156]: E0130 13:45:01.312866 2156 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:45:01.347834 kubelet[2156]: E0130 13:45:01.347790 2156 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s" Jan 30 13:45:01.550989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325112865.mount: Deactivated successfully. 
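
Note the retry interval on the "Failed to ensure lease exists" entries: 200ms, then 400ms, then 800ms, now 1.6s. That is a factor-of-two exponential backoff. The shape can be reproduced with apimachinery's wait helpers; this is an illustration of the pattern, not the lease controller's actual code:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // 200ms -> 400ms -> 800ms -> 1.6s, as in the journal above.
        backoff := wait.Backoff{Duration: 200 * time.Millisecond, Factor: 2, Steps: 4}
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            fmt.Println("ensuring lease kube-node-lease/localhost ...")
            return false, nil // keep failing, as it would while the API server is down
        })
        fmt.Println(err) // times out after the last doubled interval
    }
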
Jan 30 13:45:01.559025 containerd[1467]: time="2025-01-30T13:45:01.558985621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:45:01.560999 containerd[1467]: time="2025-01-30T13:45:01.560964452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:45:01.562067 containerd[1467]: time="2025-01-30T13:45:01.562033807Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:45:01.562998 containerd[1467]: time="2025-01-30T13:45:01.562965484Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:45:01.564089 containerd[1467]: time="2025-01-30T13:45:01.564057461Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:45:01.564900 containerd[1467]: time="2025-01-30T13:45:01.564869463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:45:01.565812 containerd[1467]: time="2025-01-30T13:45:01.565769491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:45:01.567288 containerd[1467]: time="2025-01-30T13:45:01.567257762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:45:01.568996 containerd[1467]: time="2025-01-30T13:45:01.568965233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.076613ms" Jan 30 13:45:01.569644 containerd[1467]: time="2025-01-30T13:45:01.569596066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 575.297132ms" Jan 30 13:45:01.571982 containerd[1467]: time="2025-01-30T13:45:01.571943848Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.156468ms" Jan 30 13:45:01.708727 containerd[1467]: time="2025-01-30T13:45:01.708545085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:01.708727 containerd[1467]: time="2025-01-30T13:45:01.708594137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:01.708727 containerd[1467]: time="2025-01-30T13:45:01.708610187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:01.708926 containerd[1467]: time="2025-01-30T13:45:01.708722478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:01.709365 containerd[1467]: time="2025-01-30T13:45:01.708649621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:01.709365 containerd[1467]: time="2025-01-30T13:45:01.709288700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:01.709365 containerd[1467]: time="2025-01-30T13:45:01.709301123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:01.709526 containerd[1467]: time="2025-01-30T13:45:01.709426468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:01.709526 containerd[1467]: time="2025-01-30T13:45:01.707612206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:01.709526 containerd[1467]: time="2025-01-30T13:45:01.709494375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:01.709704 containerd[1467]: time="2025-01-30T13:45:01.709611765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:01.709875 containerd[1467]: time="2025-01-30T13:45:01.709842408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:01.733777 systemd[1]: Started cri-containerd-2794f25f9b94aa290f915b0c3a17d79fe40efba8ea4644ce2ffcb3fd093623be.scope - libcontainer container 2794f25f9b94aa290f915b0c3a17d79fe40efba8ea4644ce2ffcb3fd093623be. Jan 30 13:45:01.735212 systemd[1]: Started cri-containerd-6b22be3c9a14a65e93363bf42b4ee3e4bad4594d1e23ce53f097f3b5ed8771e2.scope - libcontainer container 6b22be3c9a14a65e93363bf42b4ee3e4bad4594d1e23ce53f097f3b5ed8771e2. Jan 30 13:45:01.736800 systemd[1]: Started cri-containerd-b93affaf05c87c7e679cbb5969ee19a640c1c36c8746ee94c0931dcaac23c8df.scope - libcontainer container b93affaf05c87c7e679cbb5969ee19a640c1c36c8746ee94c0931dcaac23c8df. 
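
The cri-containerd-*.scope units are the sandbox (pause) containers for the three control-plane pods, created over the CRI gRPC API on containerd's socket. Anything that speaks CRI can query the same endpoint (crictl does this from the command line); a minimal sketch that asks it for its runtime version, with the socket path assumed per this host's configuration:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        // Should match the "Container runtime initialized" line: containerd v1.7.21.
        fmt.Println(resp.RuntimeName, resp.RuntimeVersion)
    }
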
Jan 30 13:45:01.776054 containerd[1467]: time="2025-01-30T13:45:01.775980356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2794f25f9b94aa290f915b0c3a17d79fe40efba8ea4644ce2ffcb3fd093623be\"" Jan 30 13:45:01.777924 kubelet[2156]: E0130 13:45:01.777714 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:01.780479 containerd[1467]: time="2025-01-30T13:45:01.780417616Z" level=info msg="CreateContainer within sandbox \"2794f25f9b94aa290f915b0c3a17d79fe40efba8ea4644ce2ffcb3fd093623be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:45:01.781023 containerd[1467]: time="2025-01-30T13:45:01.781002884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b22be3c9a14a65e93363bf42b4ee3e4bad4594d1e23ce53f097f3b5ed8771e2\"" Jan 30 13:45:01.781476 kubelet[2156]: E0130 13:45:01.781445 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:01.782823 containerd[1467]: time="2025-01-30T13:45:01.782734881Z" level=info msg="CreateContainer within sandbox \"6b22be3c9a14a65e93363bf42b4ee3e4bad4594d1e23ce53f097f3b5ed8771e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:45:01.783772 containerd[1467]: time="2025-01-30T13:45:01.783750014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:25a1a9d21f8d98e81843ac2350fec92a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b93affaf05c87c7e679cbb5969ee19a640c1c36c8746ee94c0931dcaac23c8df\"" Jan 30 13:45:01.784818 kubelet[2156]: E0130 13:45:01.784790 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:01.786351 containerd[1467]: time="2025-01-30T13:45:01.786316837Z" level=info msg="CreateContainer within sandbox \"b93affaf05c87c7e679cbb5969ee19a640c1c36c8746ee94c0931dcaac23c8df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:45:01.804001 containerd[1467]: time="2025-01-30T13:45:01.803933248Z" level=info msg="CreateContainer within sandbox \"2794f25f9b94aa290f915b0c3a17d79fe40efba8ea4644ce2ffcb3fd093623be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"49facf777c3a61715587599b16f0128cfe91c0b78c65c92effc95086cf36b862\"" Jan 30 13:45:01.804521 containerd[1467]: time="2025-01-30T13:45:01.804486045Z" level=info msg="StartContainer for \"49facf777c3a61715587599b16f0128cfe91c0b78c65c92effc95086cf36b862\"" Jan 30 13:45:01.807737 containerd[1467]: time="2025-01-30T13:45:01.807702004Z" level=info msg="CreateContainer within sandbox \"6b22be3c9a14a65e93363bf42b4ee3e4bad4594d1e23ce53f097f3b5ed8771e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d3cc09571d1ffaca6c22f5f3dcbda53ec363f79e8c9f3ab46ddbd553f0b752b9\"" Jan 30 13:45:01.808013 containerd[1467]: time="2025-01-30T13:45:01.807986027Z" level=info msg="StartContainer for \"d3cc09571d1ffaca6c22f5f3dcbda53ec363f79e8c9f3ab46ddbd553f0b752b9\"" Jan 30 
13:45:01.815758 containerd[1467]: time="2025-01-30T13:45:01.815710320Z" level=info msg="CreateContainer within sandbox \"b93affaf05c87c7e679cbb5969ee19a640c1c36c8746ee94c0931dcaac23c8df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"065f40fd7d510071dabf7cbfee708d0cc4abe34f2290eb30e1e6945b774faebf\"" Jan 30 13:45:01.816715 containerd[1467]: time="2025-01-30T13:45:01.816658588Z" level=info msg="StartContainer for \"065f40fd7d510071dabf7cbfee708d0cc4abe34f2290eb30e1e6945b774faebf\"" Jan 30 13:45:01.825725 kubelet[2156]: I0130 13:45:01.824949 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:45:01.825725 kubelet[2156]: E0130 13:45:01.825291 2156 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 30 13:45:01.833863 systemd[1]: Started cri-containerd-49facf777c3a61715587599b16f0128cfe91c0b78c65c92effc95086cf36b862.scope - libcontainer container 49facf777c3a61715587599b16f0128cfe91c0b78c65c92effc95086cf36b862. Jan 30 13:45:01.838067 systemd[1]: Started cri-containerd-d3cc09571d1ffaca6c22f5f3dcbda53ec363f79e8c9f3ab46ddbd553f0b752b9.scope - libcontainer container d3cc09571d1ffaca6c22f5f3dcbda53ec363f79e8c9f3ab46ddbd553f0b752b9. Jan 30 13:45:01.842678 systemd[1]: Started cri-containerd-065f40fd7d510071dabf7cbfee708d0cc4abe34f2290eb30e1e6945b774faebf.scope - libcontainer container 065f40fd7d510071dabf7cbfee708d0cc4abe34f2290eb30e1e6945b774faebf. Jan 30 13:45:01.882617 containerd[1467]: time="2025-01-30T13:45:01.882175272Z" level=info msg="StartContainer for \"49facf777c3a61715587599b16f0128cfe91c0b78c65c92effc95086cf36b862\" returns successfully" Jan 30 13:45:01.886832 containerd[1467]: time="2025-01-30T13:45:01.886792439Z" level=info msg="StartContainer for \"065f40fd7d510071dabf7cbfee708d0cc4abe34f2290eb30e1e6945b774faebf\" returns successfully" Jan 30 13:45:01.890360 containerd[1467]: time="2025-01-30T13:45:01.890216359Z" level=info msg="StartContainer for \"d3cc09571d1ffaca6c22f5f3dcbda53ec363f79e8c9f3ab46ddbd553f0b752b9\" returns successfully" Jan 30 13:45:01.974737 kubelet[2156]: E0130 13:45:01.974700 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:45:01.974907 kubelet[2156]: E0130 13:45:01.974821 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:01.976206 kubelet[2156]: E0130 13:45:01.976175 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:45:01.976338 kubelet[2156]: E0130 13:45:01.976316 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:01.978063 kubelet[2156]: E0130 13:45:01.978040 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:45:01.978192 kubelet[2156]: E0130 13:45:01.978144 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 
13:45:02.935915 kubelet[2156]: I0130 13:45:02.935863 2156 apiserver.go:52] "Watching apiserver" Jan 30 13:45:02.945390 kubelet[2156]: I0130 13:45:02.945353 2156 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:45:02.951077 kubelet[2156]: E0130 13:45:02.951039 2156 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:45:02.980089 kubelet[2156]: E0130 13:45:02.980067 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:45:02.980176 kubelet[2156]: E0130 13:45:02.980161 2156 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:45:02.980208 kubelet[2156]: E0130 13:45:02.980190 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:02.980288 kubelet[2156]: E0130 13:45:02.980274 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:03.018939 kubelet[2156]: E0130 13:45:03.018903 2156 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 30 13:45:03.427015 kubelet[2156]: I0130 13:45:03.426970 2156 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:45:03.431408 kubelet[2156]: I0130 13:45:03.431374 2156 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:45:03.444256 kubelet[2156]: I0130 13:45:03.444223 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:03.447971 kubelet[2156]: E0130 13:45:03.447930 2156 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:03.447971 kubelet[2156]: I0130 13:45:03.447955 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:03.449284 kubelet[2156]: E0130 13:45:03.449261 2156 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:03.449284 kubelet[2156]: I0130 13:45:03.449281 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:03.450392 kubelet[2156]: E0130 13:45:03.450366 2156 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:03.981321 kubelet[2156]: I0130 13:45:03.981074 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:03.982084 kubelet[2156]: I0130 13:45:03.981800 2156 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:03.985777 kubelet[2156]: 
E0130 13:45:03.985754 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:03.986830 kubelet[2156]: E0130 13:45:03.986803 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:04.522546 systemd[1]: Reloading requested from client PID 2430 ('systemctl') (unit session-9.scope)... Jan 30 13:45:04.522565 systemd[1]: Reloading... Jan 30 13:45:04.603658 zram_generator::config[2472]: No configuration found. Jan 30 13:45:04.710609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:45:04.799271 systemd[1]: Reloading finished in 276 ms. Jan 30 13:45:04.840929 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:45:04.867046 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:45:04.867342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:45:04.874824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:45:05.026188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:45:05.030755 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:45:05.068113 kubelet[2514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:45:05.068113 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:45:05.068113 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:45:05.068113 kubelet[2514]: I0130 13:45:05.068084 2514 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:45:05.075455 kubelet[2514]: I0130 13:45:05.075413 2514 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:45:05.075455 kubelet[2514]: I0130 13:45:05.075436 2514 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:45:05.075694 kubelet[2514]: I0130 13:45:05.075672 2514 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:45:05.076755 kubelet[2514]: I0130 13:45:05.076732 2514 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
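
Unlike the first kubelet start, this one finds a rotated client credential already on disk. kubelet-client-current.pem holds the certificate and private key concatenated in one PEM file, which is why a single path can serve as both halves of a TLS key pair: Go's loader skips the PEM block types it is not looking for. A small illustrative check of that file:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
    )

    func main() {
        const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"

        // Cert and key live in the same file, so it is passed twice.
        pair, err := tls.LoadX509KeyPair(pem, pem)
        if err != nil {
            log.Fatal(err)
        }
        leaf, err := x509.ParseCertificate(pair.Certificate[0])
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subject:", leaf.Subject, "expires:", leaf.NotAfter)
    }
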
Jan 30 13:45:05.078839 kubelet[2514]: I0130 13:45:05.078812 2514 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:45:05.082063 kubelet[2514]: E0130 13:45:05.082019 2514 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:45:05.082063 kubelet[2514]: I0130 13:45:05.082056 2514 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:45:05.086848 kubelet[2514]: I0130 13:45:05.086796 2514 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:45:05.087099 kubelet[2514]: I0130 13:45:05.087057 2514 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:45:05.087273 kubelet[2514]: I0130 13:45:05.087092 2514 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:45:05.087358 kubelet[2514]: I0130 13:45:05.087274 2514 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:45:05.087358 kubelet[2514]: I0130 13:45:05.087283 2514 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:45:05.087358 kubelet[2514]: I0130 13:45:05.087328 2514 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:45:05.087537 kubelet[2514]: I0130 13:45:05.087488 2514 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:45:05.087537 kubelet[2514]: I0130 13:45:05.087504 2514 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:45:05.087537 kubelet[2514]: I0130 13:45:05.087520 2514 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:45:05.087537 kubelet[2514]: I0130 13:45:05.087530 2514 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 30 13:45:05.089110 kubelet[2514]: I0130 13:45:05.088971 2514 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:45:05.089506 kubelet[2514]: I0130 13:45:05.089480 2514 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:45:05.089985 kubelet[2514]: I0130 13:45:05.089960 2514 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:45:05.090028 kubelet[2514]: I0130 13:45:05.090000 2514 server.go:1287] "Started kubelet" Jan 30 13:45:05.092134 kubelet[2514]: I0130 13:45:05.091871 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:45:05.092234 kubelet[2514]: I0130 13:45:05.091868 2514 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:45:05.092532 kubelet[2514]: I0130 13:45:05.092518 2514 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:45:05.092667 kubelet[2514]: I0130 13:45:05.092642 2514 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:45:05.094268 kubelet[2514]: I0130 13:45:05.094235 2514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:45:05.094744 kubelet[2514]: I0130 13:45:05.094719 2514 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:45:05.094976 kubelet[2514]: E0130 13:45:05.094951 2514 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:45:05.095330 kubelet[2514]: I0130 13:45:05.095308 2514 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:45:05.096841 kubelet[2514]: I0130 13:45:05.096813 2514 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:45:05.098573 kubelet[2514]: I0130 13:45:05.098545 2514 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:45:05.103569 kubelet[2514]: I0130 13:45:05.102778 2514 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:45:05.103569 kubelet[2514]: I0130 13:45:05.102949 2514 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:45:05.104705 kubelet[2514]: E0130 13:45:05.104616 2514 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:45:05.105163 kubelet[2514]: I0130 13:45:05.105130 2514 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:45:05.105471 kubelet[2514]: I0130 13:45:05.105439 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:45:05.106780 kubelet[2514]: I0130 13:45:05.106744 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:45:05.106780 kubelet[2514]: I0130 13:45:05.106774 2514 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:45:05.106929 kubelet[2514]: I0130 13:45:05.106804 2514 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:45:05.106929 kubelet[2514]: I0130 13:45:05.106814 2514 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:45:05.106929 kubelet[2514]: E0130 13:45:05.106866 2514 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:45:05.146505 kubelet[2514]: I0130 13:45:05.146479 2514 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:45:05.146505 kubelet[2514]: I0130 13:45:05.146496 2514 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:45:05.146505 kubelet[2514]: I0130 13:45:05.146515 2514 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:45:05.146706 kubelet[2514]: I0130 13:45:05.146689 2514 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:45:05.146731 kubelet[2514]: I0130 13:45:05.146704 2514 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:45:05.146731 kubelet[2514]: I0130 13:45:05.146722 2514 policy_none.go:49] "None policy: Start" Jan 30 13:45:05.146798 kubelet[2514]: I0130 13:45:05.146732 2514 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:45:05.146798 kubelet[2514]: I0130 13:45:05.146744 2514 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:45:05.146853 kubelet[2514]: I0130 13:45:05.146840 2514 state_mem.go:75] "Updated machine memory state" Jan 30 13:45:05.150828 kubelet[2514]: I0130 13:45:05.150791 2514 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:45:05.151006 kubelet[2514]: I0130 13:45:05.150985 2514 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:45:05.151037 kubelet[2514]: I0130 13:45:05.151004 2514 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:45:05.151406 kubelet[2514]: I0130 13:45:05.151208 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:45:05.152028 kubelet[2514]: E0130 13:45:05.151966 2514 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:45:05.208170 kubelet[2514]: I0130 13:45:05.208120 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:05.208330 kubelet[2514]: I0130 13:45:05.208223 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:05.208330 kubelet[2514]: I0130 13:45:05.208127 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:05.213764 kubelet[2514]: E0130 13:45:05.213729 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:05.213822 kubelet[2514]: E0130 13:45:05.213774 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:05.256023 kubelet[2514]: I0130 13:45:05.256003 2514 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:45:05.260845 kubelet[2514]: I0130 13:45:05.260826 2514 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 30 13:45:05.260904 kubelet[2514]: I0130 13:45:05.260885 2514 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:45:05.297885 kubelet[2514]: I0130 13:45:05.297839 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25a1a9d21f8d98e81843ac2350fec92a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"25a1a9d21f8d98e81843ac2350fec92a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:05.297885 kubelet[2514]: I0130 13:45:05.297881 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:05.298015 kubelet[2514]: I0130 13:45:05.297902 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25a1a9d21f8d98e81843ac2350fec92a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"25a1a9d21f8d98e81843ac2350fec92a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:05.298015 kubelet[2514]: I0130 13:45:05.297920 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25a1a9d21f8d98e81843ac2350fec92a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"25a1a9d21f8d98e81843ac2350fec92a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:05.298015 kubelet[2514]: I0130 13:45:05.297969 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:05.298015 kubelet[2514]: I0130 13:45:05.297985 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:05.298015 kubelet[2514]: I0130 13:45:05.298000 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:05.298137 kubelet[2514]: I0130 13:45:05.298015 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:05.298137 kubelet[2514]: I0130 13:45:05.298031 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:45:05.512950 kubelet[2514]: E0130 13:45:05.512854 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:05.514036 kubelet[2514]: E0130 13:45:05.514008 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:05.514081 kubelet[2514]: E0130 13:45:05.514016 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:05.524313 sudo[2554]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:45:05.524660 sudo[2554]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:45:05.983692 sudo[2554]: pam_unix(sudo:session): session closed for user root Jan 30 13:45:06.089961 kubelet[2514]: I0130 13:45:06.089920 2514 apiserver.go:52] "Watching apiserver" Jan 30 13:45:06.096016 kubelet[2514]: I0130 13:45:06.095991 2514 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:45:06.122492 kubelet[2514]: I0130 13:45:06.122463 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:06.122774 kubelet[2514]: I0130 13:45:06.122656 2514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:06.122836 kubelet[2514]: E0130 13:45:06.122799 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:06.134649 kubelet[2514]: E0130 13:45:06.128953 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:45:06.134649 
kubelet[2514]: E0130 13:45:06.129081 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:06.134649 kubelet[2514]: E0130 13:45:06.129775 2514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:45:06.134649 kubelet[2514]: E0130 13:45:06.129870 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:06.139223 kubelet[2514]: I0130 13:45:06.139149 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.13912198 podStartE2EDuration="3.13912198s" podCreationTimestamp="2025-01-30 13:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:06.138924512 +0000 UTC m=+1.104322862" watchObservedRunningTime="2025-01-30 13:45:06.13912198 +0000 UTC m=+1.104520330" Jan 30 13:45:06.151004 kubelet[2514]: I0130 13:45:06.150947 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.150929408 podStartE2EDuration="3.150929408s" podCreationTimestamp="2025-01-30 13:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:06.144369791 +0000 UTC m=+1.109768141" watchObservedRunningTime="2025-01-30 13:45:06.150929408 +0000 UTC m=+1.116327748" Jan 30 13:45:06.159087 kubelet[2514]: I0130 13:45:06.159025 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.159003719 podStartE2EDuration="1.159003719s" podCreationTimestamp="2025-01-30 13:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:06.151132266 +0000 UTC m=+1.116530606" watchObservedRunningTime="2025-01-30 13:45:06.159003719 +0000 UTC m=+1.124402069" Jan 30 13:45:07.124209 kubelet[2514]: E0130 13:45:07.124169 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:07.124611 kubelet[2514]: E0130 13:45:07.124370 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:07.365351 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 30 13:45:07.367367 sshd[1656]: pam_unix(sshd:session): session closed for user core Jan 30 13:45:07.371342 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:42364.service: Deactivated successfully. Jan 30 13:45:07.373143 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:45:07.373355 systemd[1]: session-9.scope: Consumed 4.253s CPU time, 157.7M memory peak, 0B memory swap peak. Jan 30 13:45:07.373978 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:45:07.374794 systemd-logind[1454]: Removed session 9. 
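The dns.go:153 "Nameserver limits exceeded" errors that recur throughout this boot are the kubelet observing more than three nameserver entries in the resolv.conf it hands to pods; classic resolvers only consult the first three (glibc's MAXNS), so the kubelet truncates the list, here to 1.1.1.1 1.0.0.1 8.8.8.8, and warns about the omitted entries. A minimal Go sketch of that check, assuming a resolv.conf-style input and the limit of 3:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS: resolvers ignore entries past the third

    // nameservers collects the nameserver entries from a resolv.conf-style file.
    func nameservers(path string) ([]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        var out []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                out = append(out, fields[1])
            }
        }
        return out, sc.Err()
    }

    func main() {
        ns, err := nameservers("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if len(ns) > maxNameservers {
            // The same condition the kubelet's dns.go warns about above.
            fmt.Printf("nameserver limit exceeded, applying: %s\n",
                strings.Join(ns[:maxNameservers], " "))
        }
    }

Trimming the node's resolv.conf to three nameservers, or pointing the kubelet at a trimmed file via --resolv-conf, makes the warning go away.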
Jan 30 13:45:09.916814 kubelet[2514]: E0130 13:45:09.916778 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:10.885440 kubelet[2514]: I0130 13:45:10.885407 2514 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:45:10.885762 containerd[1467]: time="2025-01-30T13:45:10.885721280Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:45:10.886429 kubelet[2514]: I0130 13:45:10.885943 2514 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:45:11.440962 kubelet[2514]: E0130 13:45:11.440929 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:11.933768 kubelet[2514]: I0130 13:45:11.933608 2514 status_manager.go:890] "Failed to get status for pod" podUID="7a6d6f00-8111-4a62-955c-514db52a4eb8" pod="kube-system/kube-proxy-b85zs" err="pods \"kube-proxy-b85zs\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jan 30 13:45:11.938818 systemd[1]: Created slice kubepods-besteffort-pod7a6d6f00_8111_4a62_955c_514db52a4eb8.slice - libcontainer container kubepods-besteffort-pod7a6d6f00_8111_4a62_955c_514db52a4eb8.slice. Jan 30 13:45:11.957749 systemd[1]: Created slice kubepods-burstable-podf5fa7c00_c1b0_4cb2_8209_3199189f9081.slice - libcontainer container kubepods-burstable-podf5fa7c00_c1b0_4cb2_8209_3199189f9081.slice. 
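Once the node object carries a podCIDR, the kubelet pushes it to the runtime over CRI (the UpdateRuntimeConfig call behind the kuberuntime_manager line above), and containerd then waits for a CNI plugin, Cilium here, to drop its config. A quick net/netip sketch of what a /24 pod CIDR like the one assigned above provides:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Pod CIDR handed to the runtime in the log above.
        prefix := netip.MustParsePrefix("192.168.0.0/24")

        fmt.Println("network:", prefix.Masked().Addr()) // 192.168.0.0
        fmt.Println("bits:", prefix.Bits())             // 24

        // Number of addresses in an IPv4 prefix: 2^(32-bits).
        total := 1 << (32 - prefix.Bits())
        fmt.Println("addresses:", total) // 256; conventionally 254 usable

        // Walk the first few candidate pod IPs.
        addr := prefix.Masked().Addr().Next() // skip the network address
        for i := 0; i < 3; i++ {
            fmt.Println("pod ip:", addr)
            addr = addr.Next()
        }
    }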
Jan 30 13:45:12.043968 kubelet[2514]: I0130 13:45:12.043907 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6h4\" (UniqueName: \"kubernetes.io/projected/7a6d6f00-8111-4a62-955c-514db52a4eb8-kube-api-access-jf6h4\") pod \"kube-proxy-b85zs\" (UID: \"7a6d6f00-8111-4a62-955c-514db52a4eb8\") " pod="kube-system/kube-proxy-b85zs" Jan 30 13:45:12.043968 kubelet[2514]: I0130 13:45:12.043956 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-bpf-maps\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.043968 kubelet[2514]: I0130 13:45:12.043973 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-cgroup\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044147 kubelet[2514]: I0130 13:45:12.043988 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-host-proc-sys-net\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044147 kubelet[2514]: I0130 13:45:12.044004 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p2hk\" (UniqueName: \"kubernetes.io/projected/f5fa7c00-c1b0-4cb2-8209-3199189f9081-kube-api-access-8p2hk\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044147 kubelet[2514]: I0130 13:45:12.044017 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-etc-cni-netd\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044147 kubelet[2514]: I0130 13:45:12.044040 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a6d6f00-8111-4a62-955c-514db52a4eb8-kube-proxy\") pod \"kube-proxy-b85zs\" (UID: \"7a6d6f00-8111-4a62-955c-514db52a4eb8\") " pod="kube-system/kube-proxy-b85zs" Jan 30 13:45:12.044147 kubelet[2514]: I0130 13:45:12.044053 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a6d6f00-8111-4a62-955c-514db52a4eb8-xtables-lock\") pod \"kube-proxy-b85zs\" (UID: \"7a6d6f00-8111-4a62-955c-514db52a4eb8\") " pod="kube-system/kube-proxy-b85zs" Jan 30 13:45:12.044147 kubelet[2514]: I0130 13:45:12.044065 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-lib-modules\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044289 kubelet[2514]: I0130 13:45:12.044078 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-xtables-lock\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044289 kubelet[2514]: I0130 13:45:12.044090 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5fa7c00-c1b0-4cb2-8209-3199189f9081-clustermesh-secrets\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044289 kubelet[2514]: I0130 13:45:12.044104 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-config-path\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044289 kubelet[2514]: I0130 13:45:12.044120 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cni-path\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044289 kubelet[2514]: I0130 13:45:12.044132 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5fa7c00-c1b0-4cb2-8209-3199189f9081-hubble-tls\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044289 kubelet[2514]: I0130 13:45:12.044146 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-host-proc-sys-kernel\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044421 kubelet[2514]: I0130 13:45:12.044159 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a6d6f00-8111-4a62-955c-514db52a4eb8-lib-modules\") pod \"kube-proxy-b85zs\" (UID: \"7a6d6f00-8111-4a62-955c-514db52a4eb8\") " pod="kube-system/kube-proxy-b85zs" Jan 30 13:45:12.044421 kubelet[2514]: I0130 13:45:12.044173 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-run\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.044421 kubelet[2514]: I0130 13:45:12.044187 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-hostproc\") pod \"cilium-vddvs\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") " pod="kube-system/cilium-vddvs" Jan 30 13:45:12.064928 systemd[1]: Created slice kubepods-besteffort-podd2e2fe91_782d_4dc4_b7ed_9b597c17d300.slice - libcontainer container kubepods-besteffort-podd2e2fe91_782d_4dc4_b7ed_9b597c17d300.slice. 
Jan 30 13:45:12.130830 kubelet[2514]: E0130 13:45:12.130792 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.144423 kubelet[2514]: I0130 13:45:12.144368 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e2fe91-782d-4dc4-b7ed-9b597c17d300-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lzk44\" (UID: \"d2e2fe91-782d-4dc4-b7ed-9b597c17d300\") " pod="kube-system/cilium-operator-6c4d7847fc-lzk44" Jan 30 13:45:12.144423 kubelet[2514]: I0130 13:45:12.144408 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8wd9\" (UniqueName: \"kubernetes.io/projected/d2e2fe91-782d-4dc4-b7ed-9b597c17d300-kube-api-access-m8wd9\") pod \"cilium-operator-6c4d7847fc-lzk44\" (UID: \"d2e2fe91-782d-4dc4-b7ed-9b597c17d300\") " pod="kube-system/cilium-operator-6c4d7847fc-lzk44" Jan 30 13:45:12.372751 kubelet[2514]: E0130 13:45:12.372712 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.373432 containerd[1467]: time="2025-01-30T13:45:12.373383018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lzk44,Uid:d2e2fe91-782d-4dc4-b7ed-9b597c17d300,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:12.412066 containerd[1467]: time="2025-01-30T13:45:12.411876728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:12.412066 containerd[1467]: time="2025-01-30T13:45:12.411946771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:12.412066 containerd[1467]: time="2025-01-30T13:45:12.411959396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:12.412280 containerd[1467]: time="2025-01-30T13:45:12.412058815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:12.434857 systemd[1]: Started cri-containerd-f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb.scope - libcontainer container f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb. 
Jan 30 13:45:12.471102 containerd[1467]: time="2025-01-30T13:45:12.471057259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lzk44,Uid:d2e2fe91-782d-4dc4-b7ed-9b597c17d300,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb\"" Jan 30 13:45:12.471811 kubelet[2514]: E0130 13:45:12.471783 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.472841 containerd[1467]: time="2025-01-30T13:45:12.472818693Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:45:12.552367 kubelet[2514]: E0130 13:45:12.552330 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.552856 containerd[1467]: time="2025-01-30T13:45:12.552808126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b85zs,Uid:7a6d6f00-8111-4a62-955c-514db52a4eb8,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:12.561133 kubelet[2514]: E0130 13:45:12.561091 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.561710 containerd[1467]: time="2025-01-30T13:45:12.561670648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vddvs,Uid:f5fa7c00-c1b0-4cb2-8209-3199189f9081,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:12.581733 containerd[1467]: time="2025-01-30T13:45:12.580833288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:12.581733 containerd[1467]: time="2025-01-30T13:45:12.581484747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:12.581733 containerd[1467]: time="2025-01-30T13:45:12.581504055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:12.581733 containerd[1467]: time="2025-01-30T13:45:12.581605247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:12.588766 containerd[1467]: time="2025-01-30T13:45:12.588612386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:12.588766 containerd[1467]: time="2025-01-30T13:45:12.588711385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:12.588766 containerd[1467]: time="2025-01-30T13:45:12.588731503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:12.589005 containerd[1467]: time="2025-01-30T13:45:12.588834539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:12.602798 systemd[1]: Started cri-containerd-aaf32544b5210a304bd845c9d32d3e2471d2fb49002a3bd651884201910a2c3d.scope - libcontainer container aaf32544b5210a304bd845c9d32d3e2471d2fb49002a3bd651884201910a2c3d. Jan 30 13:45:12.606364 systemd[1]: Started cri-containerd-cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08.scope - libcontainer container cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08. Jan 30 13:45:12.629962 containerd[1467]: time="2025-01-30T13:45:12.629838829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vddvs,Uid:f5fa7c00-c1b0-4cb2-8209-3199189f9081,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\"" Jan 30 13:45:12.630688 kubelet[2514]: E0130 13:45:12.630567 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.636975 containerd[1467]: time="2025-01-30T13:45:12.636928916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b85zs,Uid:7a6d6f00-8111-4a62-955c-514db52a4eb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf32544b5210a304bd845c9d32d3e2471d2fb49002a3bd651884201910a2c3d\"" Jan 30 13:45:12.637783 kubelet[2514]: E0130 13:45:12.637755 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:12.639957 containerd[1467]: time="2025-01-30T13:45:12.639922395Z" level=info msg="CreateContainer within sandbox \"aaf32544b5210a304bd845c9d32d3e2471d2fb49002a3bd651884201910a2c3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:45:12.662922 containerd[1467]: time="2025-01-30T13:45:12.662861534Z" level=info msg="CreateContainer within sandbox \"aaf32544b5210a304bd845c9d32d3e2471d2fb49002a3bd651884201910a2c3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4c29183688241d804df0559b8758f3f31025ecebdc7ae1944a207fe76f33026\"" Jan 30 13:45:12.663601 containerd[1467]: time="2025-01-30T13:45:12.663542569Z" level=info msg="StartContainer for \"a4c29183688241d804df0559b8758f3f31025ecebdc7ae1944a207fe76f33026\"" Jan 30 13:45:12.694768 systemd[1]: Started cri-containerd-a4c29183688241d804df0559b8758f3f31025ecebdc7ae1944a207fe76f33026.scope - libcontainer container a4c29183688241d804df0559b8758f3f31025ecebdc7ae1944a207fe76f33026. 
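Each "Started cri-containerd-<id>.scope" line is systemd wrapping a runc shim in a transient scope while containerd's CRI plugin builds the sandbox and its containers. The same containerd lifecycle (pull, create a container with a fresh snapshot and OCI spec, create a task, start it, wait) looks roughly like this through the plain containerd Go client; the socket path, image reference, and container ID below are placeholder choices, and the CRI plugin itself drives this internally in the k8s.io namespace rather than through this API:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // A standalone demo is safer in its own namespace than in "k8s.io".
        ctx := namespaces.WithNamespace(context.Background(), "default")

        image, err := client.Pull(ctx, "docker.io/library/alpine:latest",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        container, err := client.NewContainer(ctx, "demo",
            containerd.WithNewSnapshot("demo-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image),
                oci.WithProcessArgs("echo", "hello")))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        exitCh, err := task.Wait(ctx) // register before Start to avoid races
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
        status := <-exitCh
        code, _, err := status.Result()
        if err != nil {
            log.Fatal(err)
        }
        log.Println("task exited with status", code)
    }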
Jan 30 13:45:12.774589 containerd[1467]: time="2025-01-30T13:45:12.774524555Z" level=info msg="StartContainer for \"a4c29183688241d804df0559b8758f3f31025ecebdc7ae1944a207fe76f33026\" returns successfully" Jan 30 13:45:13.133745 kubelet[2514]: E0130 13:45:13.133661 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:13.142594 kubelet[2514]: I0130 13:45:13.142532 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b85zs" podStartSLOduration=2.142514949 podStartE2EDuration="2.142514949s" podCreationTimestamp="2025-01-30 13:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:13.142224176 +0000 UTC m=+8.107622526" watchObservedRunningTime="2025-01-30 13:45:13.142514949 +0000 UTC m=+8.107913299" Jan 30 13:45:14.297927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987213257.mount: Deactivated successfully. Jan 30 13:45:14.566610 containerd[1467]: time="2025-01-30T13:45:14.566494181Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:14.567340 containerd[1467]: time="2025-01-30T13:45:14.567275065Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:45:14.568464 containerd[1467]: time="2025-01-30T13:45:14.568424449Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:14.569672 containerd[1467]: time="2025-01-30T13:45:14.569641642Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.09678088s" Jan 30 13:45:14.569738 containerd[1467]: time="2025-01-30T13:45:14.569673693Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:45:14.570641 containerd[1467]: time="2025-01-30T13:45:14.570491787Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:45:14.571315 containerd[1467]: time="2025-01-30T13:45:14.571292579Z" level=info msg="CreateContainer within sandbox \"f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:45:14.603638 containerd[1467]: time="2025-01-30T13:45:14.603573109Z" level=info msg="CreateContainer within sandbox \"f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\"" Jan 30 
13:45:14.604548 containerd[1467]: time="2025-01-30T13:45:14.603957660Z" level=info msg="StartContainer for \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\"" Jan 30 13:45:14.645784 systemd[1]: Started cri-containerd-f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960.scope - libcontainer container f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960. Jan 30 13:45:14.653898 kubelet[2514]: E0130 13:45:14.653870 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:14.672954 containerd[1467]: time="2025-01-30T13:45:14.672904373Z" level=info msg="StartContainer for \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\" returns successfully" Jan 30 13:45:15.139113 kubelet[2514]: E0130 13:45:15.139065 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:15.139730 kubelet[2514]: E0130 13:45:15.139691 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:15.148960 kubelet[2514]: I0130 13:45:15.148884 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lzk44" podStartSLOduration=1.050917157 podStartE2EDuration="3.148864174s" podCreationTimestamp="2025-01-30 13:45:12 +0000 UTC" firstStartedPulling="2025-01-30 13:45:12.472408341 +0000 UTC m=+7.437806691" lastFinishedPulling="2025-01-30 13:45:14.570355358 +0000 UTC m=+9.535753708" observedRunningTime="2025-01-30 13:45:15.148557753 +0000 UTC m=+10.113956113" watchObservedRunningTime="2025-01-30 13:45:15.148864174 +0000 UTC m=+10.114262524" Jan 30 13:45:16.140442 kubelet[2514]: E0130 13:45:16.140399 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:17.574788 update_engine[1459]: I20250130 13:45:17.574697 1459 update_attempter.cc:509] Updating boot flags... Jan 30 13:45:17.610766 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2951) Jan 30 13:45:17.656775 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2955) Jan 30 13:45:17.685653 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2955) Jan 30 13:45:19.921713 kubelet[2514]: E0130 13:45:19.921481 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:20.146367 kubelet[2514]: E0130 13:45:20.146331 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:21.842011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188607610.mount: Deactivated successfully. 
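The pod_startup_latency_tracker lines encode a fixed relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. Pods that pulled nothing, like the static control-plane pods earlier, log zero-value pull timestamps and identical SLO and E2E figures. Replaying the cilium-operator record above in Go (timestamps copied from the log; the tracker samples its clock a fraction of a millisecond after the logged observedRunningTime, so the computed values land within about 0.3 ms of the logged ones):

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func ts(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps from the cilium-operator startup record above.
        created := ts("2025-01-30 13:45:12 +0000 UTC")
        firstPull := ts("2025-01-30 13:45:12.472408341 +0000 UTC")
        lastPull := ts("2025-01-30 13:45:14.570355358 +0000 UTC")
        running := ts("2025-01-30 13:45:15.148557753 +0000 UTC")

        e2e := running.Sub(created)     // wall time from creation to running
        pull := lastPull.Sub(firstPull) // time spent pulling the image
        slo := e2e - pull               // SLO duration excludes image pulls

        fmt.Println("podStartE2EDuration ~", e2e) // logged: 3.148864174s
        fmt.Println("podStartSLOduration ~", slo) // logged: 1.050917157s
    }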
Jan 30 13:45:23.927799 containerd[1467]: time="2025-01-30T13:45:23.927739307Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:23.928513 containerd[1467]: time="2025-01-30T13:45:23.928456091Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:45:23.929593 containerd[1467]: time="2025-01-30T13:45:23.929558434Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:23.931175 containerd[1467]: time="2025-01-30T13:45:23.931145021Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.360620992s" Jan 30 13:45:23.931218 containerd[1467]: time="2025-01-30T13:45:23.931172463Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:45:23.934149 containerd[1467]: time="2025-01-30T13:45:23.934099633Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:45:23.945116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397989152.mount: Deactivated successfully. Jan 30 13:45:23.946221 containerd[1467]: time="2025-01-30T13:45:23.946185625Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\"" Jan 30 13:45:23.946796 containerd[1467]: time="2025-01-30T13:45:23.946594527Z" level=info msg="StartContainer for \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\"" Jan 30 13:45:23.977755 systemd[1]: Started cri-containerd-9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837.scope - libcontainer container 9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837. Jan 30 13:45:24.002907 containerd[1467]: time="2025-01-30T13:45:24.002862252Z" level=info msg="StartContainer for \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\" returns successfully" Jan 30 13:45:24.012687 systemd[1]: cri-containerd-9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837.scope: Deactivated successfully. 
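The cilium image pull above also pins down effective registry throughput: about 166.7 MB in 9.36 s. As a quick arithmetic check on the logged figures:

    package main

    import "fmt"

    func main() {
        // From the cilium image pull above.
        const bytesRead = 166730503.0 // bytes the puller reported reading
        const seconds = 9.360620992   // reported pull duration

        fmt.Printf("throughput: %.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ~17.0
        fmt.Printf("throughput: %.1f MB/s\n", bytesRead/seconds/1e6)          // ~17.8
    }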
Jan 30 13:45:24.154110 kubelet[2514]: E0130 13:45:24.154075 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:24.171171 containerd[1467]: time="2025-01-30T13:45:24.171099190Z" level=info msg="shim disconnected" id=9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837 namespace=k8s.io Jan 30 13:45:24.171171 containerd[1467]: time="2025-01-30T13:45:24.171164664Z" level=warning msg="cleaning up after shim disconnected" id=9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837 namespace=k8s.io Jan 30 13:45:24.171171 containerd[1467]: time="2025-01-30T13:45:24.171173781Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:45:24.942591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837-rootfs.mount: Deactivated successfully. Jan 30 13:45:25.155979 kubelet[2514]: E0130 13:45:25.155935 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:25.158764 containerd[1467]: time="2025-01-30T13:45:25.158729195Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:45:25.174845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269048145.mount: Deactivated successfully. Jan 30 13:45:25.177117 containerd[1467]: time="2025-01-30T13:45:25.177074069Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\"" Jan 30 13:45:25.177839 containerd[1467]: time="2025-01-30T13:45:25.177576397Z" level=info msg="StartContainer for \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\"" Jan 30 13:45:25.207770 systemd[1]: Started cri-containerd-99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d.scope - libcontainer container 99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d. Jan 30 13:45:25.232886 containerd[1467]: time="2025-01-30T13:45:25.232846944Z" level=info msg="StartContainer for \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\" returns successfully" Jan 30 13:45:25.243418 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:45:25.243875 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:45:25.243954 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:45:25.250912 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:45:25.251125 systemd[1]: cri-containerd-99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d.scope: Deactivated successfully. 
Jan 30 13:45:25.268570 containerd[1467]: time="2025-01-30T13:45:25.268503822Z" level=info msg="shim disconnected" id=99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d namespace=k8s.io Jan 30 13:45:25.268570 containerd[1467]: time="2025-01-30T13:45:25.268555891Z" level=warning msg="cleaning up after shim disconnected" id=99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d namespace=k8s.io Jan 30 13:45:25.268570 containerd[1467]: time="2025-01-30T13:45:25.268564838Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:45:25.269279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:45:25.942859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d-rootfs.mount: Deactivated successfully. Jan 30 13:45:26.158711 kubelet[2514]: E0130 13:45:26.158645 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:26.161041 containerd[1467]: time="2025-01-30T13:45:26.160545204Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:45:26.195252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount232359655.mount: Deactivated successfully. Jan 30 13:45:26.203723 containerd[1467]: time="2025-01-30T13:45:26.203671988Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\"" Jan 30 13:45:26.204286 containerd[1467]: time="2025-01-30T13:45:26.204251882Z" level=info msg="StartContainer for \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\"" Jan 30 13:45:26.239781 systemd[1]: Started cri-containerd-7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d.scope - libcontainer container 7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d. Jan 30 13:45:26.265803 containerd[1467]: time="2025-01-30T13:45:26.265759434Z" level=info msg="StartContainer for \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\" returns successfully" Jan 30 13:45:26.267476 systemd[1]: cri-containerd-7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d.scope: Deactivated successfully. Jan 30 13:45:26.291799 containerd[1467]: time="2025-01-30T13:45:26.291748536Z" level=info msg="shim disconnected" id=7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d namespace=k8s.io Jan 30 13:45:26.291799 containerd[1467]: time="2025-01-30T13:45:26.291796175Z" level=warning msg="cleaning up after shim disconnected" id=7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d namespace=k8s.io Jan 30 13:45:26.291979 containerd[1467]: time="2025-01-30T13:45:26.291805613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:45:26.942736 systemd[1]: run-containerd-runc-k8s.io-7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d-runc.7fCsCD.mount: Deactivated successfully. Jan 30 13:45:26.942852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d-rootfs.mount: Deactivated successfully. 
Jan 30 13:45:27.162220 kubelet[2514]: E0130 13:45:27.162176 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:27.164130 containerd[1467]: time="2025-01-30T13:45:27.164080429Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:45:27.185328 containerd[1467]: time="2025-01-30T13:45:27.185278486Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\"" Jan 30 13:45:27.185805 containerd[1467]: time="2025-01-30T13:45:27.185782165Z" level=info msg="StartContainer for \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\"" Jan 30 13:45:27.213765 systemd[1]: Started cri-containerd-28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e.scope - libcontainer container 28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e. Jan 30 13:45:27.235233 systemd[1]: cri-containerd-28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e.scope: Deactivated successfully. Jan 30 13:45:27.236797 containerd[1467]: time="2025-01-30T13:45:27.236761447Z" level=info msg="StartContainer for \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\" returns successfully" Jan 30 13:45:27.258454 containerd[1467]: time="2025-01-30T13:45:27.258394525Z" level=info msg="shim disconnected" id=28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e namespace=k8s.io Jan 30 13:45:27.258454 containerd[1467]: time="2025-01-30T13:45:27.258449819Z" level=warning msg="cleaning up after shim disconnected" id=28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e namespace=k8s.io Jan 30 13:45:27.258454 containerd[1467]: time="2025-01-30T13:45:27.258459638Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:45:27.943239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e-rootfs.mount: Deactivated successfully. 
Jan 30 13:45:28.165496 kubelet[2514]: E0130 13:45:28.165463 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:28.167726 containerd[1467]: time="2025-01-30T13:45:28.167686571Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:45:28.186702 containerd[1467]: time="2025-01-30T13:45:28.186665329Z" level=info msg="CreateContainer within sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\"" Jan 30 13:45:28.187156 containerd[1467]: time="2025-01-30T13:45:28.187121239Z" level=info msg="StartContainer for \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\"" Jan 30 13:45:28.216765 systemd[1]: Started cri-containerd-4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba.scope - libcontainer container 4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba. Jan 30 13:45:28.244060 containerd[1467]: time="2025-01-30T13:45:28.243978798Z" level=info msg="StartContainer for \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\" returns successfully" Jan 30 13:45:28.392133 kubelet[2514]: I0130 13:45:28.391955 2514 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:45:28.417559 systemd[1]: Created slice kubepods-burstable-podf4d2f065_62c9_4f9d_ba9e_d54969c59944.slice - libcontainer container kubepods-burstable-podf4d2f065_62c9_4f9d_ba9e_d54969c59944.slice. Jan 30 13:45:28.423407 systemd[1]: Created slice kubepods-burstable-podd85b47cb_bc5e_43d6_8fe6_3fb2909bb75d.slice - libcontainer container kubepods-burstable-podd85b47cb_bc5e_43d6_8fe6_3fb2909bb75d.slice. 
Jan 30 13:45:28.455252 kubelet[2514]: I0130 13:45:28.455203 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqds8\" (UniqueName: \"kubernetes.io/projected/f4d2f065-62c9-4f9d-ba9e-d54969c59944-kube-api-access-hqds8\") pod \"coredns-668d6bf9bc-jvq49\" (UID: \"f4d2f065-62c9-4f9d-ba9e-d54969c59944\") " pod="kube-system/coredns-668d6bf9bc-jvq49" Jan 30 13:45:28.455252 kubelet[2514]: I0130 13:45:28.455244 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d85b47cb-bc5e-43d6-8fe6-3fb2909bb75d-config-volume\") pod \"coredns-668d6bf9bc-lhjdl\" (UID: \"d85b47cb-bc5e-43d6-8fe6-3fb2909bb75d\") " pod="kube-system/coredns-668d6bf9bc-lhjdl" Jan 30 13:45:28.455252 kubelet[2514]: I0130 13:45:28.455263 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4d2f065-62c9-4f9d-ba9e-d54969c59944-config-volume\") pod \"coredns-668d6bf9bc-jvq49\" (UID: \"f4d2f065-62c9-4f9d-ba9e-d54969c59944\") " pod="kube-system/coredns-668d6bf9bc-jvq49" Jan 30 13:45:28.455439 kubelet[2514]: I0130 13:45:28.455279 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598ll\" (UniqueName: \"kubernetes.io/projected/d85b47cb-bc5e-43d6-8fe6-3fb2909bb75d-kube-api-access-598ll\") pod \"coredns-668d6bf9bc-lhjdl\" (UID: \"d85b47cb-bc5e-43d6-8fe6-3fb2909bb75d\") " pod="kube-system/coredns-668d6bf9bc-lhjdl" Jan 30 13:45:28.721150 kubelet[2514]: E0130 13:45:28.721110 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:28.721791 containerd[1467]: time="2025-01-30T13:45:28.721736973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jvq49,Uid:f4d2f065-62c9-4f9d-ba9e-d54969c59944,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:28.726395 kubelet[2514]: E0130 13:45:28.726366 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:28.726695 containerd[1467]: time="2025-01-30T13:45:28.726665185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lhjdl,Uid:d85b47cb-bc5e-43d6-8fe6-3fb2909bb75d,Namespace:kube-system,Attempt:0,}" Jan 30 13:45:28.947081 systemd[1]: run-containerd-runc-k8s.io-4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba-runc.6MTwIc.mount: Deactivated successfully. 
Jan 30 13:45:29.170466 kubelet[2514]: E0130 13:45:29.170438 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:29.184405 kubelet[2514]: I0130 13:45:29.184017 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vddvs" podStartSLOduration=6.8834109869999995 podStartE2EDuration="18.183991476s" podCreationTimestamp="2025-01-30 13:45:11 +0000 UTC" firstStartedPulling="2025-01-30 13:45:12.631325309 +0000 UTC m=+7.596723659" lastFinishedPulling="2025-01-30 13:45:23.931905798 +0000 UTC m=+18.897304148" observedRunningTime="2025-01-30 13:45:29.18366472 +0000 UTC m=+24.149063060" watchObservedRunningTime="2025-01-30 13:45:29.183991476 +0000 UTC m=+24.149389826" Jan 30 13:45:30.171486 kubelet[2514]: E0130 13:45:30.171439 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:30.376002 systemd-networkd[1398]: cilium_host: Link UP Jan 30 13:45:30.376233 systemd-networkd[1398]: cilium_net: Link UP Jan 30 13:45:30.377347 systemd-networkd[1398]: cilium_net: Gained carrier Jan 30 13:45:30.377585 systemd-networkd[1398]: cilium_host: Gained carrier Jan 30 13:45:30.377788 systemd-networkd[1398]: cilium_net: Gained IPv6LL Jan 30 13:45:30.378004 systemd-networkd[1398]: cilium_host: Gained IPv6LL Jan 30 13:45:30.479162 systemd-networkd[1398]: cilium_vxlan: Link UP Jan 30 13:45:30.479172 systemd-networkd[1398]: cilium_vxlan: Gained carrier Jan 30 13:45:30.685703 kernel: NET: Registered PF_ALG protocol family Jan 30 13:45:31.173503 kubelet[2514]: E0130 13:45:31.173475 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:31.313848 systemd-networkd[1398]: lxc_health: Link UP Jan 30 13:45:31.326472 systemd-networkd[1398]: lxc_health: Gained carrier Jan 30 13:45:31.772148 systemd-networkd[1398]: lxc4d7d309c7366: Link UP Jan 30 13:45:31.778584 systemd-networkd[1398]: lxc8b54ddbdcddc: Link UP Jan 30 13:45:31.786662 kernel: eth0: renamed from tmp0a76c Jan 30 13:45:31.793655 kernel: eth0: renamed from tmpe8504 Jan 30 13:45:31.803134 systemd-networkd[1398]: lxc4d7d309c7366: Gained carrier Jan 30 13:45:31.803542 systemd-networkd[1398]: lxc8b54ddbdcddc: Gained carrier Jan 30 13:45:32.170778 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Jan 30 13:45:32.562395 kubelet[2514]: E0130 13:45:32.562311 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:32.618788 systemd-networkd[1398]: lxc_health: Gained IPv6LL Jan 30 13:45:32.874788 systemd-networkd[1398]: lxc8b54ddbdcddc: Gained IPv6LL Jan 30 13:45:33.194851 systemd-networkd[1398]: lxc4d7d309c7366: Gained IPv6LL Jan 30 13:45:33.293678 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:43676.service - OpenSSH per-connection server daemon (10.0.0.1:43676). Jan 30 13:45:33.335173 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 43676 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:45:33.336974 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:45:33.340898 systemd-logind[1454]: New session 10 of user core. 
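sshd identifies the client key by the SHA-256 digest of its wire-format blob, base64-encoded without padding, which is the SHA256:OHll... token in the "Accepted publickey" line above. golang.org/x/crypto/ssh computes the identical form, so a key on disk can be matched against the journal; the file path argument here is whichever public key you want to check:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Usage: fingerprint <path-to-public-key>, e.g. ~/.ssh/id_rsa.pub
        b, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        pub, comment, _, _, err := ssh.ParseAuthorizedKey(b)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // FingerprintSHA256 prints the same "SHA256:<base64>" form sshd logs.
        fmt.Println(comment, ssh.FingerprintSHA256(pub))
    }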
Jan 30 13:45:33.347753 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:45:33.487989 sshd[3747]: pam_unix(sshd:session): session closed for user core Jan 30 13:45:33.492031 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:43676.service: Deactivated successfully. Jan 30 13:45:33.494213 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:45:33.494931 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:45:33.495827 systemd-logind[1454]: Removed session 10. Jan 30 13:45:35.283046 containerd[1467]: time="2025-01-30T13:45:35.282949118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:35.283046 containerd[1467]: time="2025-01-30T13:45:35.282998392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:35.283046 containerd[1467]: time="2025-01-30T13:45:35.283009122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:35.283488 containerd[1467]: time="2025-01-30T13:45:35.283086878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:35.287638 containerd[1467]: time="2025-01-30T13:45:35.284389369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:45:35.287638 containerd[1467]: time="2025-01-30T13:45:35.284495217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:45:35.287638 containerd[1467]: time="2025-01-30T13:45:35.284522549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:35.287638 containerd[1467]: time="2025-01-30T13:45:35.284649177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:45:35.308765 systemd[1]: Started cri-containerd-0a76c295a9f90acc46737230566a03342ba5dd91ae39010f0d9c6362f9d8e5e9.scope - libcontainer container 0a76c295a9f90acc46737230566a03342ba5dd91ae39010f0d9c6362f9d8e5e9. Jan 30 13:45:35.310750 systemd[1]: Started cri-containerd-e850417aac3b886937f64f817b2093c2fc144e34ae7b95876eb9406e81f14b8b.scope - libcontainer container e850417aac3b886937f64f817b2093c2fc144e34ae7b95876eb9406e81f14b8b. 
Jan 30 13:45:35.320439 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:45:35.323439 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:45:35.343850 containerd[1467]: time="2025-01-30T13:45:35.343806472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jvq49,Uid:f4d2f065-62c9-4f9d-ba9e-d54969c59944,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a76c295a9f90acc46737230566a03342ba5dd91ae39010f0d9c6362f9d8e5e9\"" Jan 30 13:45:35.344550 kubelet[2514]: E0130 13:45:35.344514 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:35.346372 containerd[1467]: time="2025-01-30T13:45:35.346332905Z" level=info msg="CreateContainer within sandbox \"0a76c295a9f90acc46737230566a03342ba5dd91ae39010f0d9c6362f9d8e5e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:45:35.348663 containerd[1467]: time="2025-01-30T13:45:35.348614949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lhjdl,Uid:d85b47cb-bc5e-43d6-8fe6-3fb2909bb75d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e850417aac3b886937f64f817b2093c2fc144e34ae7b95876eb9406e81f14b8b\"" Jan 30 13:45:35.349231 kubelet[2514]: E0130 13:45:35.349196 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:45:35.357205 containerd[1467]: time="2025-01-30T13:45:35.357165076Z" level=info msg="CreateContainer within sandbox \"e850417aac3b886937f64f817b2093c2fc144e34ae7b95876eb9406e81f14b8b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:45:35.373295 containerd[1467]: time="2025-01-30T13:45:35.373245761Z" level=info msg="CreateContainer within sandbox \"0a76c295a9f90acc46737230566a03342ba5dd91ae39010f0d9c6362f9d8e5e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ca0816d376481b8f87d888c89c12b80f3521602c67163ce91d3d17674a2f79b\"" Jan 30 13:45:35.373807 containerd[1467]: time="2025-01-30T13:45:35.373772523Z" level=info msg="StartContainer for \"4ca0816d376481b8f87d888c89c12b80f3521602c67163ce91d3d17674a2f79b\"" Jan 30 13:45:35.376029 containerd[1467]: time="2025-01-30T13:45:35.375995796Z" level=info msg="CreateContainer within sandbox \"e850417aac3b886937f64f817b2093c2fc144e34ae7b95876eb9406e81f14b8b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63bea545a7658a25cabbfc3fb35bbf8f105464c2fe1493acba145616af217003\"" Jan 30 13:45:35.376510 containerd[1467]: time="2025-01-30T13:45:35.376478554Z" level=info msg="StartContainer for \"63bea545a7658a25cabbfc3fb35bbf8f105464c2fe1493acba145616af217003\"" Jan 30 13:45:35.403758 systemd[1]: Started cri-containerd-4ca0816d376481b8f87d888c89c12b80f3521602c67163ce91d3d17674a2f79b.scope - libcontainer container 4ca0816d376481b8f87d888c89c12b80f3521602c67163ce91d3d17674a2f79b. Jan 30 13:45:35.407197 systemd[1]: Started cri-containerd-63bea545a7658a25cabbfc3fb35bbf8f105464c2fe1493acba145616af217003.scope - libcontainer container 63bea545a7658a25cabbfc3fb35bbf8f105464c2fe1493acba145616af217003. 
Jan 30 13:45:35.435237 containerd[1467]: time="2025-01-30T13:45:35.435179981Z" level=info msg="StartContainer for \"4ca0816d376481b8f87d888c89c12b80f3521602c67163ce91d3d17674a2f79b\" returns successfully"
Jan 30 13:45:35.435364 containerd[1467]: time="2025-01-30T13:45:35.435200840Z" level=info msg="StartContainer for \"63bea545a7658a25cabbfc3fb35bbf8f105464c2fe1493acba145616af217003\" returns successfully"
Jan 30 13:45:36.185168 kubelet[2514]: E0130 13:45:36.184573 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:36.189887 kubelet[2514]: E0130 13:45:36.189594 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:36.207710 kubelet[2514]: I0130 13:45:36.207649 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jvq49" podStartSLOduration=24.207616105 podStartE2EDuration="24.207616105s" podCreationTimestamp="2025-01-30 13:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:36.198444633 +0000 UTC m=+31.163842993" watchObservedRunningTime="2025-01-30 13:45:36.207616105 +0000 UTC m=+31.173014456"
Jan 30 13:45:36.217963 kubelet[2514]: I0130 13:45:36.217887 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lhjdl" podStartSLOduration=24.217868602 podStartE2EDuration="24.217868602s" podCreationTimestamp="2025-01-30 13:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:45:36.217489579 +0000 UTC m=+31.182887939" watchObservedRunningTime="2025-01-30 13:45:36.217868602 +0000 UTC m=+31.183266952"
Jan 30 13:45:36.288014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645876816.mount: Deactivated successfully.
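The recurring dns.go:153 "Nameserver limits exceeded" errors above are kubelet trimming the node's resolver configuration: the glibc resolver honors at most three nameserver entries, so when the host's resolv.conf lists more, kubelet omits the extras and logs the line it actually applied (here "1.1.1.1 1.0.0.1 8.8.8.8"). A minimal sketch of that check, assuming a conventional resolv.conf layout (the file path and helper name are illustrative, not taken from the log):

    # Illustrative only: flag resolv.conf files that would trigger kubelet's
    # "Nameserver limits exceeded" warning (glibc uses at most 3 nameservers).
    def check_resolv_conf(path="/etc/resolv.conf", limit=3):
        servers = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 2 and fields[0] == "nameserver":
                    servers.append(fields[1])
        return servers[:limit], servers[limit:]  # (applied, omitted)

    applied, omitted = check_resolv_conf()
    if omitted:
        print("applied nameserver line:", " ".join(applied))
        print("omitted:", " ".join(omitted))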
Jan 30 13:45:37.190907 kubelet[2514]: E0130 13:45:37.190872 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:37.190907 kubelet[2514]: E0130 13:45:37.190926 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:37.527903 kubelet[2514]: I0130 13:45:37.527766 2514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:45:37.529256 kubelet[2514]: E0130 13:45:37.529209 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:38.192699 kubelet[2514]: E0130 13:45:38.192644 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:38.193140 kubelet[2514]: E0130 13:45:38.192776 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:38.193140 kubelet[2514]: E0130 13:45:38.192865 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:45:38.501763 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:43686.service - OpenSSH per-connection server daemon (10.0.0.1:43686).
Jan 30 13:45:38.543506 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 43686 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:38.545047 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:38.548843 systemd-logind[1454]: New session 11 of user core.
Jan 30 13:45:38.558745 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 13:45:38.722254 sshd[3939]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:38.726191 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:43686.service: Deactivated successfully.
Jan 30 13:45:38.727973 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 13:45:38.728686 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit.
Jan 30 13:45:38.729476 systemd-logind[1454]: Removed session 11.
Jan 30 13:45:43.732272 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:32968.service - OpenSSH per-connection server daemon (10.0.0.1:32968).
Jan 30 13:45:43.769758 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 32968 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:43.771097 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:43.774544 systemd-logind[1454]: New session 12 of user core.
Jan 30 13:45:43.783736 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 13:45:43.903016 sshd[3957]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:43.906554 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:32968.service: Deactivated successfully.
Jan 30 13:45:43.908448 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 13:45:43.909085 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:45:43.909901 systemd-logind[1454]: Removed session 12.
Jan 30 13:45:48.922153 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:32976.service - OpenSSH per-connection server daemon (10.0.0.1:32976).
Jan 30 13:45:48.964315 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 32976 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:48.965808 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:48.969587 systemd-logind[1454]: New session 13 of user core.
Jan 30 13:45:48.982737 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:45:49.100953 sshd[3973]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:49.112502 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:32976.service: Deactivated successfully.
Jan 30 13:45:49.114323 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:45:49.115805 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:45:49.117156 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:32988.service - OpenSSH per-connection server daemon (10.0.0.1:32988).
Jan 30 13:45:49.117916 systemd-logind[1454]: Removed session 13.
Jan 30 13:45:49.156671 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 32988 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:49.158314 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:49.162028 systemd-logind[1454]: New session 14 of user core.
Jan 30 13:45:49.172752 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:45:49.330720 sshd[3988]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:49.340348 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:32988.service: Deactivated successfully.
Jan 30 13:45:49.342455 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:45:49.345878 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:45:49.353113 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:33004.service - OpenSSH per-connection server daemon (10.0.0.1:33004).
Jan 30 13:45:49.354108 systemd-logind[1454]: Removed session 14.
Jan 30 13:45:49.389222 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 33004 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:49.390970 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:49.395249 systemd-logind[1454]: New session 15 of user core.
Jan 30 13:45:49.411787 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:45:49.526982 sshd[4001]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:49.530862 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:33004.service: Deactivated successfully.
Jan 30 13:45:49.532973 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:45:49.533615 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:45:49.534526 systemd-logind[1454]: Removed session 15.
Jan 30 13:45:54.538188 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:52584.service - OpenSSH per-connection server daemon (10.0.0.1:52584).
Jan 30 13:45:54.576923 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 52584 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:54.578418 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:54.582240 systemd-logind[1454]: New session 16 of user core.
Jan 30 13:45:54.592759 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:45:54.702582 sshd[4015]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:54.706135 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:52584.service: Deactivated successfully.
Jan 30 13:45:54.708185 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:45:54.708875 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:45:54.710005 systemd-logind[1454]: Removed session 16.
Jan 30 13:45:59.715670 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:52598.service - OpenSSH per-connection server daemon (10.0.0.1:52598).
Jan 30 13:45:59.757411 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 52598 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:59.759138 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:59.763489 systemd-logind[1454]: New session 17 of user core.
Jan 30 13:45:59.784779 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:45:59.891097 sshd[4030]: pam_unix(sshd:session): session closed for user core
Jan 30 13:45:59.905521 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:52598.service: Deactivated successfully.
Jan 30 13:45:59.907423 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:45:59.909311 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:45:59.917862 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:52614.service - OpenSSH per-connection server daemon (10.0.0.1:52614).
Jan 30 13:45:59.918791 systemd-logind[1454]: Removed session 17.
Jan 30 13:45:59.952925 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 52614 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:45:59.954382 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:45:59.958688 systemd-logind[1454]: New session 18 of user core.
Jan 30 13:45:59.971854 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:46:00.476684 sshd[4045]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:00.495493 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:52614.service: Deactivated successfully.
Jan 30 13:46:00.497366 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:46:00.499173 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:46:00.504874 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:52624.service - OpenSSH per-connection server daemon (10.0.0.1:52624).
Jan 30 13:46:00.505838 systemd-logind[1454]: Removed session 18.
Jan 30 13:46:00.541408 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 52624 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:00.542864 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:00.546477 systemd-logind[1454]: New session 19 of user core.
Jan 30 13:46:00.557741 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:46:01.651041 sshd[4058]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:01.658721 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:52624.service: Deactivated successfully.
Jan 30 13:46:01.660443 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:46:01.661779 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:46:01.662961 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:54236.service - OpenSSH per-connection server daemon (10.0.0.1:54236).
Jan 30 13:46:01.663838 systemd-logind[1454]: Removed session 19.
Jan 30 13:46:01.700339 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 54236 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:01.701722 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:01.706127 systemd-logind[1454]: New session 20 of user core.
Jan 30 13:46:01.711736 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:46:01.967052 sshd[4077]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:01.975380 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:54236.service: Deactivated successfully.
Jan 30 13:46:01.977186 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:46:01.979223 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:46:01.990863 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:54242.service - OpenSSH per-connection server daemon (10.0.0.1:54242).
Jan 30 13:46:01.991926 systemd-logind[1454]: Removed session 20.
Jan 30 13:46:02.025742 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 54242 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:02.027230 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:02.031407 systemd-logind[1454]: New session 21 of user core.
Jan 30 13:46:02.035761 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:46:02.149701 sshd[4089]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:02.153753 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:54242.service: Deactivated successfully.
Jan 30 13:46:02.155527 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:46:02.156187 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:46:02.157027 systemd-logind[1454]: Removed session 21.
Jan 30 13:46:07.162498 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:54244.service - OpenSSH per-connection server daemon (10.0.0.1:54244).
Jan 30 13:46:07.200468 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 54244 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:07.201916 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:07.205534 systemd-logind[1454]: New session 22 of user core.
Jan 30 13:46:07.215742 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:46:07.318160 sshd[4108]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:07.321826 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:54244.service: Deactivated successfully.
Jan 30 13:46:07.323904 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:46:07.324470 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:46:07.325400 systemd-logind[1454]: Removed session 22.
Jan 30 13:46:12.333692 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:51784.service - OpenSSH per-connection server daemon (10.0.0.1:51784).
Jan 30 13:46:12.370953 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 51784 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:12.372331 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:12.376254 systemd-logind[1454]: New session 23 of user core.
Jan 30 13:46:12.384781 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:46:12.491500 sshd[4123]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:12.495069 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:51784.service: Deactivated successfully.
Jan 30 13:46:12.497084 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:46:12.497808 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:46:12.498587 systemd-logind[1454]: Removed session 23.
Jan 30 13:46:17.503706 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:51788.service - OpenSSH per-connection server daemon (10.0.0.1:51788).
Jan 30 13:46:17.542537 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 51788 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:17.544026 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:17.547690 systemd-logind[1454]: New session 24 of user core.
Jan 30 13:46:17.553782 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:46:17.651772 sshd[4139]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:17.655161 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:51788.service: Deactivated successfully.
Jan 30 13:46:17.656903 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:46:17.657477 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:46:17.658498 systemd-logind[1454]: Removed session 24.
Jan 30 13:46:22.671048 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:57842.service - OpenSSH per-connection server daemon (10.0.0.1:57842).
Jan 30 13:46:22.710000 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 57842 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:22.711588 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:22.715307 systemd-logind[1454]: New session 25 of user core.
Jan 30 13:46:22.721759 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:46:22.829412 sshd[4154]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:22.840715 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:57842.service: Deactivated successfully.
Jan 30 13:46:22.842891 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:46:22.844423 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:46:22.849955 systemd[1]: Started sshd@25-10.0.0.97:22-10.0.0.1:57846.service - OpenSSH per-connection server daemon (10.0.0.1:57846).
Jan 30 13:46:22.851237 systemd-logind[1454]: Removed session 25.
Jan 30 13:46:22.884041 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 57846 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:22.885466 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:22.889725 systemd-logind[1454]: New session 26 of user core.
Jan 30 13:46:22.903778 systemd[1]: Started session-26.scope - Session 26 of User core.
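Every SSH connection in this stretch follows the same lifecycle: sshd accepts the public key, pam_unix opens the session, systemd-logind assigns a session number, systemd starts the matching session-N.scope, and the same units are torn down on disconnect. A small illustrative parser (the journal.txt filename and the assumed year are placeholders, not from the log) that pairs the logind "New session"/"Removed session" events to measure how long each session lasted:

    import re
    from datetime import datetime

    # Match the logind events exactly as they appear in this journal.
    OPEN = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\w+)\.")
    CLOSE = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

    def parse_ts(stamp, year=2025):
        # journal short timestamps omit the year; assume one for arithmetic
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

    def session_durations(lines):
        opened = {}
        for line in lines:
            if m := OPEN.match(line):
                opened[m.group(2)] = parse_ts(m.group(1))
            elif (m := CLOSE.match(line)) and m.group(2) in opened:
                yield m.group(2), parse_ts(m.group(1)) - opened.pop(m.group(2))

    with open("journal.txt") as f:
        for sid, dur in session_durations(f):
            print(f"session {sid}: {dur.total_seconds():.3f}s")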
Jan 30 13:46:23.108543 kubelet[2514]: E0130 13:46:23.107497 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:24.230906 containerd[1467]: time="2025-01-30T13:46:24.230835322Z" level=info msg="StopContainer for \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\" with timeout 30 (s)"
Jan 30 13:46:24.231484 containerd[1467]: time="2025-01-30T13:46:24.231233613Z" level=info msg="Stop container \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\" with signal terminated"
Jan 30 13:46:24.254090 systemd[1]: cri-containerd-f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960.scope: Deactivated successfully.
Jan 30 13:46:24.281026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960-rootfs.mount: Deactivated successfully.
Jan 30 13:46:24.284192 containerd[1467]: time="2025-01-30T13:46:24.284124549Z" level=info msg="StopContainer for \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\" with timeout 2 (s)"
Jan 30 13:46:24.284358 containerd[1467]: time="2025-01-30T13:46:24.284335562Z" level=info msg="Stop container \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\" with signal terminated"
Jan 30 13:46:24.284874 containerd[1467]: time="2025-01-30T13:46:24.284834876Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:46:24.289421 containerd[1467]: time="2025-01-30T13:46:24.289366342Z" level=info msg="shim disconnected" id=f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960 namespace=k8s.io
Jan 30 13:46:24.289421 containerd[1467]: time="2025-01-30T13:46:24.289407371Z" level=warning msg="cleaning up after shim disconnected" id=f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960 namespace=k8s.io
Jan 30 13:46:24.289421 containerd[1467]: time="2025-01-30T13:46:24.289415056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:46:24.292010 systemd-networkd[1398]: lxc_health: Link DOWN
Jan 30 13:46:24.292017 systemd-networkd[1398]: lxc_health: Lost carrier
Jan 30 13:46:24.309663 containerd[1467]: time="2025-01-30T13:46:24.309607969Z" level=info msg="StopContainer for \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\" returns successfully"
Jan 30 13:46:24.313478 containerd[1467]: time="2025-01-30T13:46:24.313196634Z" level=info msg="StopPodSandbox for \"f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb\""
Jan 30 13:46:24.313478 containerd[1467]: time="2025-01-30T13:46:24.313248824Z" level=info msg="Container to stop \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:46:24.315802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb-shm.mount: Deactivated successfully.
Jan 30 13:46:24.316487 systemd[1]: cri-containerd-4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba.scope: Deactivated successfully.
Jan 30 13:46:24.316784 systemd[1]: cri-containerd-4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba.scope: Consumed 6.725s CPU time.
Jan 30 13:46:24.328223 systemd[1]: cri-containerd-f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb.scope: Deactivated successfully.
Jan 30 13:46:24.338467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba-rootfs.mount: Deactivated successfully.
Jan 30 13:46:24.345650 containerd[1467]: time="2025-01-30T13:46:24.345585817Z" level=info msg="shim disconnected" id=4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba namespace=k8s.io
Jan 30 13:46:24.345985 containerd[1467]: time="2025-01-30T13:46:24.345839762Z" level=warning msg="cleaning up after shim disconnected" id=4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba namespace=k8s.io
Jan 30 13:46:24.345985 containerd[1467]: time="2025-01-30T13:46:24.345855431Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:46:24.356683 containerd[1467]: time="2025-01-30T13:46:24.356570961Z" level=info msg="shim disconnected" id=f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb namespace=k8s.io
Jan 30 13:46:24.356683 containerd[1467]: time="2025-01-30T13:46:24.356677455Z" level=warning msg="cleaning up after shim disconnected" id=f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb namespace=k8s.io
Jan 30 13:46:24.356906 containerd[1467]: time="2025-01-30T13:46:24.356692914Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:46:24.364692 containerd[1467]: time="2025-01-30T13:46:24.364588517Z" level=info msg="StopContainer for \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\" returns successfully"
Jan 30 13:46:24.367211 containerd[1467]: time="2025-01-30T13:46:24.367186179Z" level=info msg="StopPodSandbox for \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\""
Jan 30 13:46:24.367270 containerd[1467]: time="2025-01-30T13:46:24.367224542Z" level=info msg="Container to stop \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:46:24.367270 containerd[1467]: time="2025-01-30T13:46:24.367238169Z" level=info msg="Container to stop \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:46:24.367270 containerd[1467]: time="2025-01-30T13:46:24.367247296Z" level=info msg="Container to stop \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:46:24.367270 containerd[1467]: time="2025-01-30T13:46:24.367256494Z" level=info msg="Container to stop \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:46:24.367270 containerd[1467]: time="2025-01-30T13:46:24.367266062Z" level=info msg="Container to stop \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:46:24.373178 systemd[1]: cri-containerd-cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08.scope: Deactivated successfully.
Jan 30 13:46:24.384896 containerd[1467]: time="2025-01-30T13:46:24.384855512Z" level=info msg="TearDown network for sandbox \"f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb\" successfully"
Jan 30 13:46:24.384896 containerd[1467]: time="2025-01-30T13:46:24.384889778Z" level=info msg="StopPodSandbox for \"f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb\" returns successfully"
Jan 30 13:46:24.402942 containerd[1467]: time="2025-01-30T13:46:24.402860587Z" level=info msg="shim disconnected" id=cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08 namespace=k8s.io
Jan 30 13:46:24.402942 containerd[1467]: time="2025-01-30T13:46:24.402932624Z" level=warning msg="cleaning up after shim disconnected" id=cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08 namespace=k8s.io
Jan 30 13:46:24.402942 containerd[1467]: time="2025-01-30T13:46:24.402941582Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:46:24.428293 containerd[1467]: time="2025-01-30T13:46:24.428226032Z" level=info msg="TearDown network for sandbox \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" successfully"
Jan 30 13:46:24.428293 containerd[1467]: time="2025-01-30T13:46:24.428273924Z" level=info msg="StopPodSandbox for \"cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08\" returns successfully"
Jan 30 13:46:24.470556 kubelet[2514]: I0130 13:46:24.470483 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-etc-cni-netd\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.470556 kubelet[2514]: I0130 13:46:24.470534 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cni-path\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.470556 kubelet[2514]: I0130 13:46:24.470567 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p2hk\" (UniqueName: \"kubernetes.io/projected/f5fa7c00-c1b0-4cb2-8209-3199189f9081-kube-api-access-8p2hk\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471345 kubelet[2514]: I0130 13:46:24.470593 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5fa7c00-c1b0-4cb2-8209-3199189f9081-clustermesh-secrets\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471345 kubelet[2514]: I0130 13:46:24.470615 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5fa7c00-c1b0-4cb2-8209-3199189f9081-hubble-tls\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471345 kubelet[2514]: I0130 13:46:24.470698 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e2fe91-782d-4dc4-b7ed-9b597c17d300-cilium-config-path\") pod \"d2e2fe91-782d-4dc4-b7ed-9b597c17d300\" (UID: \"d2e2fe91-782d-4dc4-b7ed-9b597c17d300\") "
Jan 30 13:46:24.471345 kubelet[2514]: I0130 13:46:24.470734 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-hostproc\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471345 kubelet[2514]: I0130 13:46:24.470753 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-lib-modules\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471345 kubelet[2514]: I0130 13:46:24.470780 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-xtables-lock\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471545 kubelet[2514]: I0130 13:46:24.470800 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-host-proc-sys-kernel\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471545 kubelet[2514]: I0130 13:46:24.470820 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-bpf-maps\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471545 kubelet[2514]: I0130 13:46:24.470839 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-config-path\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471545 kubelet[2514]: I0130 13:46:24.470858 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-host-proc-sys-net\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471545 kubelet[2514]: I0130 13:46:24.470879 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-cgroup\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471545 kubelet[2514]: I0130 13:46:24.470902 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-run\") pod \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\" (UID: \"f5fa7c00-c1b0-4cb2-8209-3199189f9081\") "
Jan 30 13:46:24.471763 kubelet[2514]: I0130 13:46:24.470925 2514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8wd9\" (UniqueName: \"kubernetes.io/projected/d2e2fe91-782d-4dc4-b7ed-9b597c17d300-kube-api-access-m8wd9\") pod \"d2e2fe91-782d-4dc4-b7ed-9b597c17d300\" (UID: \"d2e2fe91-782d-4dc4-b7ed-9b597c17d300\") "
Jan 30 13:46:24.473999 kubelet[2514]: I0130 13:46:24.470640 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.473999 kubelet[2514]: I0130 13:46:24.473606 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cni-path" (OuterVolumeSpecName: "cni-path") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.473999 kubelet[2514]: I0130 13:46:24.473650 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.474348 kubelet[2514]: I0130 13:46:24.474270 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.474348 kubelet[2514]: I0130 13:46:24.474303 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.474435 kubelet[2514]: I0130 13:46:24.474375 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-hostproc" (OuterVolumeSpecName: "hostproc") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.474435 kubelet[2514]: I0130 13:46:24.474400 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.474435 kubelet[2514]: I0130 13:46:24.474422 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.474536 kubelet[2514]: I0130 13:46:24.474441 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.474536 kubelet[2514]: I0130 13:46:24.474464 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:46:24.475130 kubelet[2514]: I0130 13:46:24.475091 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fa7c00-c1b0-4cb2-8209-3199189f9081-kube-api-access-8p2hk" (OuterVolumeSpecName: "kube-api-access-8p2hk") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "kube-api-access-8p2hk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:46:24.475368 kubelet[2514]: I0130 13:46:24.475325 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e2fe91-782d-4dc4-b7ed-9b597c17d300-kube-api-access-m8wd9" (OuterVolumeSpecName: "kube-api-access-m8wd9") pod "d2e2fe91-782d-4dc4-b7ed-9b597c17d300" (UID: "d2e2fe91-782d-4dc4-b7ed-9b597c17d300"). InnerVolumeSpecName "kube-api-access-m8wd9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:46:24.475773 kubelet[2514]: I0130 13:46:24.475752 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fa7c00-c1b0-4cb2-8209-3199189f9081-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:46:24.476159 kubelet[2514]: I0130 13:46:24.476122 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fa7c00-c1b0-4cb2-8209-3199189f9081-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 13:46:24.477338 kubelet[2514]: I0130 13:46:24.477292 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2e2fe91-782d-4dc4-b7ed-9b597c17d300-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2e2fe91-782d-4dc4-b7ed-9b597c17d300" (UID: "d2e2fe91-782d-4dc4-b7ed-9b597c17d300"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 13:46:24.479795 kubelet[2514]: I0130 13:46:24.479765 2514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f5fa7c00-c1b0-4cb2-8209-3199189f9081" (UID: "f5fa7c00-c1b0-4cb2-8209-3199189f9081"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 13:46:24.572127 kubelet[2514]: I0130 13:46:24.572079 2514 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572127 kubelet[2514]: I0130 13:46:24.572113 2514 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572127 kubelet[2514]: I0130 13:46:24.572125 2514 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m8wd9\" (UniqueName: \"kubernetes.io/projected/d2e2fe91-782d-4dc4-b7ed-9b597c17d300-kube-api-access-m8wd9\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572127 kubelet[2514]: I0130 13:46:24.572137 2514 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572390 kubelet[2514]: I0130 13:46:24.572148 2514 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572390 kubelet[2514]: I0130 13:46:24.572158 2514 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8p2hk\" (UniqueName: \"kubernetes.io/projected/f5fa7c00-c1b0-4cb2-8209-3199189f9081-kube-api-access-8p2hk\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572390 kubelet[2514]: I0130 13:46:24.572167 2514 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5fa7c00-c1b0-4cb2-8209-3199189f9081-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572390 kubelet[2514]: I0130 13:46:24.572177 2514 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5fa7c00-c1b0-4cb2-8209-3199189f9081-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572390 kubelet[2514]: I0130 13:46:24.572187 2514 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572390 kubelet[2514]: I0130 13:46:24.572195 2514 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572390 kubelet[2514]: I0130 13:46:24.572204 2514 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572390 kubelet[2514]: I0130 13:46:24.572215 2514 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e2fe91-782d-4dc4-b7ed-9b597c17d300-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572574 kubelet[2514]: I0130 13:46:24.572226 2514 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572574 kubelet[2514]: I0130 13:46:24.572235 2514 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5fa7c00-c1b0-4cb2-8209-3199189f9081-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572574 kubelet[2514]: I0130 13:46:24.572245 2514 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:24.572574 kubelet[2514]: I0130 13:46:24.572254 2514 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5fa7c00-c1b0-4cb2-8209-3199189f9081-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 30 13:46:25.115924 systemd[1]: Removed slice kubepods-burstable-podf5fa7c00_c1b0_4cb2_8209_3199189f9081.slice - libcontainer container kubepods-burstable-podf5fa7c00_c1b0_4cb2_8209_3199189f9081.slice.
Jan 30 13:46:25.116039 systemd[1]: kubepods-burstable-podf5fa7c00_c1b0_4cb2_8209_3199189f9081.slice: Consumed 6.819s CPU time.
Jan 30 13:46:25.117208 systemd[1]: Removed slice kubepods-besteffort-podd2e2fe91_782d_4dc4_b7ed_9b597c17d300.slice - libcontainer container kubepods-besteffort-podd2e2fe91_782d_4dc4_b7ed_9b597c17d300.slice.
Jan 30 13:46:25.181399 kubelet[2514]: E0130 13:46:25.181361 2514 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:46:25.257448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08-rootfs.mount: Deactivated successfully.
Jan 30 13:46:25.257603 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc41cd2efe03f28d4e5c2610c68c0f53a58c9b84e7aaa7762b6d95f24da13d08-shm.mount: Deactivated successfully.
Jan 30 13:46:25.257737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8ea1cb263101cb5d3bd439525284266462eda133def1f954d27ba3caa628dfb-rootfs.mount: Deactivated successfully.
Jan 30 13:46:25.257842 systemd[1]: var-lib-kubelet-pods-f5fa7c00\x2dc1b0\x2d4cb2\x2d8209\x2d3199189f9081-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 13:46:25.257952 systemd[1]: var-lib-kubelet-pods-d2e2fe91\x2d782d\x2d4dc4\x2db7ed\x2d9b597c17d300-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm8wd9.mount: Deactivated successfully.
Jan 30 13:46:25.258068 systemd[1]: var-lib-kubelet-pods-f5fa7c00\x2dc1b0\x2d4cb2\x2d8209\x2d3199189f9081-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 13:46:25.258179 systemd[1]: var-lib-kubelet-pods-f5fa7c00\x2dc1b0\x2d4cb2\x2d8209\x2d3199189f9081-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8p2hk.mount: Deactivated successfully.
Jan 30 13:46:25.274901 kubelet[2514]: I0130 13:46:25.274855 2514 scope.go:117] "RemoveContainer" containerID="4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba"
Jan 30 13:46:25.276752 containerd[1467]: time="2025-01-30T13:46:25.276709725Z" level=info msg="RemoveContainer for \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\""
Jan 30 13:46:25.283274 containerd[1467]: time="2025-01-30T13:46:25.283229784Z" level=info msg="RemoveContainer for \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\" returns successfully"
Jan 30 13:46:25.283476 kubelet[2514]: I0130 13:46:25.283455 2514 scope.go:117] "RemoveContainer" containerID="28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e"
Jan 30 13:46:25.284409 containerd[1467]: time="2025-01-30T13:46:25.284374519Z" level=info msg="RemoveContainer for \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\""
Jan 30 13:46:25.287735 containerd[1467]: time="2025-01-30T13:46:25.287685641Z" level=info msg="RemoveContainer for \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\" returns successfully"
Jan 30 13:46:25.287868 kubelet[2514]: I0130 13:46:25.287843 2514 scope.go:117] "RemoveContainer" containerID="7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d"
Jan 30 13:46:25.288774 containerd[1467]: time="2025-01-30T13:46:25.288714486Z" level=info msg="RemoveContainer for \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\""
Jan 30 13:46:25.292230 containerd[1467]: time="2025-01-30T13:46:25.292174913Z" level=info msg="RemoveContainer for \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\" returns successfully"
Jan 30 13:46:25.292376 kubelet[2514]: I0130 13:46:25.292351 2514 scope.go:117] "RemoveContainer" containerID="99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d"
Jan 30 13:46:25.298490 containerd[1467]: time="2025-01-30T13:46:25.298456777Z" level=info msg="RemoveContainer for \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\""
Jan 30 13:46:25.318811 containerd[1467]: time="2025-01-30T13:46:25.318760695Z" level=info msg="RemoveContainer for \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\" returns successfully"
Jan 30 13:46:25.319213 kubelet[2514]: I0130 13:46:25.319181 2514 scope.go:117] "RemoveContainer" containerID="9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837"
Jan 30 13:46:25.320550 containerd[1467]: time="2025-01-30T13:46:25.320517609Z" level=info msg="RemoveContainer for \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\""
Jan 30 13:46:25.324522 containerd[1467]: time="2025-01-30T13:46:25.324496626Z" level=info msg="RemoveContainer for \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\" returns successfully"
Jan 30 13:46:25.324670 kubelet[2514]: I0130 13:46:25.324653 2514 scope.go:117] "RemoveContainer" containerID="4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba"
Jan 30 13:46:25.327828 containerd[1467]: time="2025-01-30T13:46:25.327786317Z" level=error msg="ContainerStatus for \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\": not found"
Jan 30 13:46:25.336929 kubelet[2514]: E0130 13:46:25.336882 2514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\": not found" containerID="4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba"
Jan 30 13:46:25.337005 kubelet[2514]: I0130 13:46:25.336924 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba"} err="failed to get container status \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"4265d98f0591cfaa4e6c0a46e20c375b91525753129beee0acf844d7a72c51ba\": not found"
Jan 30 13:46:25.337005 kubelet[2514]: I0130 13:46:25.336991 2514 scope.go:117] "RemoveContainer" containerID="28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e"
Jan 30 13:46:25.337231 containerd[1467]: time="2025-01-30T13:46:25.337188680Z" level=error msg="ContainerStatus for \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\": not found"
Jan 30 13:46:25.337328 kubelet[2514]: E0130 13:46:25.337310 2514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\": not found" containerID="28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e"
Jan 30 13:46:25.337361 kubelet[2514]: I0130 13:46:25.337328 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e"} err="failed to get container status \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"28cf516ebfe47c2a1980adf74944dbce649a76ba6c16362c97023a397b75bc0e\": not found"
Jan 30 13:46:25.337361 kubelet[2514]: I0130 13:46:25.337342 2514 scope.go:117] "RemoveContainer" containerID="7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d"
Jan 30 13:46:25.337548 containerd[1467]: time="2025-01-30T13:46:25.337507409Z" level=error msg="ContainerStatus for \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\": not found"
Jan 30 13:46:25.337692 kubelet[2514]: E0130 13:46:25.337634 2514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\": not found" containerID="7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d"
Jan 30 13:46:25.337692 kubelet[2514]: I0130 13:46:25.337652 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d"} err="failed to get container status \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cc233632d68044d77d6dabd27c53fd6fcb49d0bdce13f45549f408ad1fdbf5d\": not found"
Jan 30 13:46:25.337692 kubelet[2514]: I0130 13:46:25.337663 2514 scope.go:117] "RemoveContainer" containerID="99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d"
Jan 30 13:46:25.337832 containerd[1467]: time="2025-01-30T13:46:25.337795178Z" level=error msg="ContainerStatus for \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\": not found"
Jan 30 13:46:25.337966 kubelet[2514]: E0130 13:46:25.337937 2514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\": not found" containerID="99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d"
Jan 30 13:46:25.338000 kubelet[2514]: I0130 13:46:25.337973 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d"} err="failed to get container status \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\": rpc error: code = NotFound desc = an error occurred when try to find container \"99b3ef28274a5d5e57641a840c42a58ece5dfb81fc40afcce6926c147cc2b35d\": not found"
Jan 30 13:46:25.338028 kubelet[2514]: I0130 13:46:25.338000 2514 scope.go:117] "RemoveContainer" containerID="9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837"
Jan 30 13:46:25.338247 containerd[1467]: time="2025-01-30T13:46:25.338210912Z" level=error msg="ContainerStatus for \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\": not found"
Jan 30 13:46:25.338351 kubelet[2514]: E0130 13:46:25.338320 2514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\": not found" containerID="9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837"
Jan 30 13:46:25.338351 kubelet[2514]: I0130 13:46:25.338341 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837"} err="failed to get container status \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\": rpc error: code = NotFound desc = an error occurred when try to find container \"9460849ffcb415b6ef1d48efc9727239d049580c2f0ea02685daa5332d5d9837\": not found"
Jan 30 13:46:25.338351 kubelet[2514]: I0130 13:46:25.338355 2514 scope.go:117] "RemoveContainer" containerID="f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960"
Jan 30 13:46:25.339272 containerd[1467]: time="2025-01-30T13:46:25.339245718Z" level=info msg="RemoveContainer for \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\""
Jan 30 13:46:25.342670 containerd[1467]: time="2025-01-30T13:46:25.342616544Z" level=info msg="RemoveContainer for \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\" returns successfully"
Jan 30 13:46:25.342795 kubelet[2514]: I0130 13:46:25.342765 2514 scope.go:117] "RemoveContainer" containerID="f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960"
Jan 30 13:46:25.342912 containerd[1467]: time="2025-01-30T13:46:25.342882792Z" level=error msg="ContainerStatus for \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\": not found"
Jan 30 13:46:25.342994 kubelet[2514]: E0130 13:46:25.342973 2514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\": not found" containerID="f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960"
Jan 30 13:46:25.343031 kubelet[2514]: I0130 13:46:25.342994 2514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960"} err="failed to get container status \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\": rpc error: code = NotFound desc = an error occurred when try to find container \"f021f4fe0540263c6bc3750da52eebe6f74c9d5f7652023f8a786bf311055960\": not found"
Jan 30 13:46:26.108220 kubelet[2514]: E0130 13:46:26.108172 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:26.195974 sshd[4168]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:26.205773 systemd[1]: sshd@25-10.0.0.97:22-10.0.0.1:57846.service: Deactivated successfully.
Jan 30 13:46:26.207748 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:46:26.209323 systemd-logind[1454]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:46:26.223932 systemd[1]: Started sshd@26-10.0.0.97:22-10.0.0.1:57854.service - OpenSSH per-connection server daemon (10.0.0.1:57854).
Jan 30 13:46:26.224923 systemd-logind[1454]: Removed session 26.
Jan 30 13:46:26.263021 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 57854 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:26.264693 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:26.269042 systemd-logind[1454]: New session 27 of user core.
Jan 30 13:46:26.277791 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 13:46:26.743553 sshd[4331]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:26.751557 systemd[1]: sshd@26-10.0.0.97:22-10.0.0.1:57854.service: Deactivated successfully.
Jan 30 13:46:26.754522 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:46:26.756811 systemd-logind[1454]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:46:26.763459 kubelet[2514]: I0130 13:46:26.762567 2514 memory_manager.go:355] "RemoveStaleState removing state" podUID="d2e2fe91-782d-4dc4-b7ed-9b597c17d300" containerName="cilium-operator"
Jan 30 13:46:26.763459 kubelet[2514]: I0130 13:46:26.762601 2514 memory_manager.go:355] "RemoveStaleState removing state" podUID="f5fa7c00-c1b0-4cb2-8209-3199189f9081" containerName="cilium-agent"
Jan 30 13:46:26.764994 systemd[1]: Started sshd@27-10.0.0.97:22-10.0.0.1:57856.service - OpenSSH per-connection server daemon (10.0.0.1:57856).
Jan 30 13:46:26.771349 systemd-logind[1454]: Removed session 27.
Jan 30 13:46:26.786391 systemd[1]: Created slice kubepods-burstable-pod449e18dd_b2ad_455a_908f_85b1d3c12441.slice - libcontainer container kubepods-burstable-pod449e18dd_b2ad_455a_908f_85b1d3c12441.slice.
Jan 30 13:46:26.806495 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 57856 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:26.808170 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:26.812371 systemd-logind[1454]: New session 28 of user core.
Jan 30 13:46:26.822769 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 13:46:26.873757 sshd[4344]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:26.883715 kubelet[2514]: I0130 13:46:26.883679 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-host-proc-sys-kernel\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.883835 kubelet[2514]: I0130 13:46:26.883723 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-hostproc\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.883835 kubelet[2514]: I0130 13:46:26.883752 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/449e18dd-b2ad-455a-908f-85b1d3c12441-clustermesh-secrets\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.883835 kubelet[2514]: I0130 13:46:26.883767 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-cilium-run\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.883835 kubelet[2514]: I0130 13:46:26.883782 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-etc-cni-netd\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.883835 kubelet[2514]: I0130 13:46:26.883799 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-host-proc-sys-net\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.883835 kubelet[2514]: I0130 13:46:26.883818 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-bpf-maps\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884015 kubelet[2514]: I0130 13:46:26.883833 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-lib-modules\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884015 kubelet[2514]: I0130 13:46:26.883846 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-xtables-lock\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884015 kubelet[2514]: I0130 13:46:26.883859 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/449e18dd-b2ad-455a-908f-85b1d3c12441-cilium-ipsec-secrets\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884015 kubelet[2514]: I0130 13:46:26.883883 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvf6\" (UniqueName: \"kubernetes.io/projected/449e18dd-b2ad-455a-908f-85b1d3c12441-kube-api-access-pgvf6\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884015 kubelet[2514]: I0130 13:46:26.883941 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-cilium-cgroup\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884151 kubelet[2514]: I0130 13:46:26.883980 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/449e18dd-b2ad-455a-908f-85b1d3c12441-cilium-config-path\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884151 kubelet[2514]: I0130 13:46:26.884000 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/449e18dd-b2ad-455a-908f-85b1d3c12441-hubble-tls\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884151 kubelet[2514]: I0130 13:46:26.884014 2514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/449e18dd-b2ad-455a-908f-85b1d3c12441-cni-path\") pod \"cilium-dpvkn\" (UID: \"449e18dd-b2ad-455a-908f-85b1d3c12441\") " pod="kube-system/cilium-dpvkn"
Jan 30 13:46:26.884748 systemd[1]: sshd@27-10.0.0.97:22-10.0.0.1:57856.service: Deactivated successfully.
Jan 30 13:46:26.886552 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 13:46:26.888040 systemd-logind[1454]: Session 28 logged out. Waiting for processes to exit.
Jan 30 13:46:26.900903 systemd[1]: Started sshd@28-10.0.0.97:22-10.0.0.1:57860.service - OpenSSH per-connection server daemon (10.0.0.1:57860).
Jan 30 13:46:26.901786 systemd-logind[1454]: Removed session 28.
Jan 30 13:46:26.935195 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 57860 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE
Jan 30 13:46:26.936825 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:46:26.940744 systemd-logind[1454]: New session 29 of user core.
Jan 30 13:46:26.950768 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 30 13:46:27.089474 kubelet[2514]: E0130 13:46:27.089414 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:27.090156 containerd[1467]: time="2025-01-30T13:46:27.090065368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dpvkn,Uid:449e18dd-b2ad-455a-908f-85b1d3c12441,Namespace:kube-system,Attempt:0,}"
Jan 30 13:46:27.111360 kubelet[2514]: I0130 13:46:27.110113 2514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e2fe91-782d-4dc4-b7ed-9b597c17d300" path="/var/lib/kubelet/pods/d2e2fe91-782d-4dc4-b7ed-9b597c17d300/volumes"
Jan 30 13:46:27.111360 kubelet[2514]: I0130 13:46:27.110809 2514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fa7c00-c1b0-4cb2-8209-3199189f9081" path="/var/lib/kubelet/pods/f5fa7c00-c1b0-4cb2-8209-3199189f9081/volumes"
Jan 30 13:46:27.113193 containerd[1467]: time="2025-01-30T13:46:27.112329560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:46:27.113193 containerd[1467]: time="2025-01-30T13:46:27.113173479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:46:27.113300 containerd[1467]: time="2025-01-30T13:46:27.113192396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:46:27.113420 containerd[1467]: time="2025-01-30T13:46:27.113311704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:46:27.138769 systemd[1]: Started cri-containerd-3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a.scope - libcontainer container 3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a.
Jan 30 13:46:27.161446 containerd[1467]: time="2025-01-30T13:46:27.161399475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dpvkn,Uid:449e18dd-b2ad-455a-908f-85b1d3c12441,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\""
Jan 30 13:46:27.162417 kubelet[2514]: E0130 13:46:27.162397 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:27.164560 containerd[1467]: time="2025-01-30T13:46:27.164515970Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:46:27.277770 kubelet[2514]: I0130 13:46:27.277698 2514 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:46:27Z","lastTransitionTime":"2025-01-30T13:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 13:46:27.404611 containerd[1467]: time="2025-01-30T13:46:27.404472352Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a88865a5fbef6413aa5b5337ef7e0d9a61ee743826db17519a8d542ff4805585\""
Jan 30 13:46:27.405010 containerd[1467]: time="2025-01-30T13:46:27.404983456Z" level=info msg="StartContainer for \"a88865a5fbef6413aa5b5337ef7e0d9a61ee743826db17519a8d542ff4805585\""
Jan 30 13:46:27.430789 systemd[1]: Started cri-containerd-a88865a5fbef6413aa5b5337ef7e0d9a61ee743826db17519a8d542ff4805585.scope - libcontainer container a88865a5fbef6413aa5b5337ef7e0d9a61ee743826db17519a8d542ff4805585.
Jan 30 13:46:27.464415 systemd[1]: cri-containerd-a88865a5fbef6413aa5b5337ef7e0d9a61ee743826db17519a8d542ff4805585.scope: Deactivated successfully.
Jan 30 13:46:27.529918 containerd[1467]: time="2025-01-30T13:46:27.529879739Z" level=info msg="StartContainer for \"a88865a5fbef6413aa5b5337ef7e0d9a61ee743826db17519a8d542ff4805585\" returns successfully"
Jan 30 13:46:27.695905 containerd[1467]: time="2025-01-30T13:46:27.695754112Z" level=info msg="shim disconnected" id=a88865a5fbef6413aa5b5337ef7e0d9a61ee743826db17519a8d542ff4805585 namespace=k8s.io
Jan 30 13:46:27.695905 containerd[1467]: time="2025-01-30T13:46:27.695818525Z" level=warning msg="cleaning up after shim disconnected" id=a88865a5fbef6413aa5b5337ef7e0d9a61ee743826db17519a8d542ff4805585 namespace=k8s.io
Jan 30 13:46:27.695905 containerd[1467]: time="2025-01-30T13:46:27.695826750Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:46:28.287431 kubelet[2514]: E0130 13:46:28.287400 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:28.289495 containerd[1467]: time="2025-01-30T13:46:28.289446371Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:46:28.363278 containerd[1467]: time="2025-01-30T13:46:28.363220932Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246\""
Jan 30 13:46:28.363933 containerd[1467]: time="2025-01-30T13:46:28.363849551Z" level=info msg="StartContainer for \"c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246\""
Jan 30 13:46:28.402779 systemd[1]: Started cri-containerd-c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246.scope - libcontainer container c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246.
Jan 30 13:46:28.427086 containerd[1467]: time="2025-01-30T13:46:28.427046533Z" level=info msg="StartContainer for \"c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246\" returns successfully"
Jan 30 13:46:28.432400 systemd[1]: cri-containerd-c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246.scope: Deactivated successfully.
Jan 30 13:46:28.461313 containerd[1467]: time="2025-01-30T13:46:28.461236996Z" level=info msg="shim disconnected" id=c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246 namespace=k8s.io
Jan 30 13:46:28.461313 containerd[1467]: time="2025-01-30T13:46:28.461295718Z" level=warning msg="cleaning up after shim disconnected" id=c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246 namespace=k8s.io
Jan 30 13:46:28.461313 containerd[1467]: time="2025-01-30T13:46:28.461306969Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:46:28.989738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8f250fec89d017b204a298fcdb88441c3e9a3b7f2340a88dcd350eab8132246-rootfs.mount: Deactivated successfully.
Jan 30 13:46:29.107610 kubelet[2514]: E0130 13:46:29.107572 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:29.290797 kubelet[2514]: E0130 13:46:29.290747 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:29.293246 containerd[1467]: time="2025-01-30T13:46:29.293190204Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:46:29.332221 containerd[1467]: time="2025-01-30T13:46:29.332163225Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc\""
Jan 30 13:46:29.332709 containerd[1467]: time="2025-01-30T13:46:29.332667066Z" level=info msg="StartContainer for \"4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc\""
Jan 30 13:46:29.361749 systemd[1]: Started cri-containerd-4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc.scope - libcontainer container 4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc.
Jan 30 13:46:29.389399 systemd[1]: cri-containerd-4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc.scope: Deactivated successfully.
Jan 30 13:46:29.399319 containerd[1467]: time="2025-01-30T13:46:29.399274196Z" level=info msg="StartContainer for \"4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc\" returns successfully"
Jan 30 13:46:29.422572 containerd[1467]: time="2025-01-30T13:46:29.422498711Z" level=info msg="shim disconnected" id=4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc namespace=k8s.io
Jan 30 13:46:29.422572 containerd[1467]: time="2025-01-30T13:46:29.422556491Z" level=warning msg="cleaning up after shim disconnected" id=4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc namespace=k8s.io
Jan 30 13:46:29.422572 containerd[1467]: time="2025-01-30T13:46:29.422567383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:46:29.990056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e8fabee58b558a046a8c31d801d1c3ca1b4110f5f92d8c4892d38044a89aefc-rootfs.mount: Deactivated successfully.
Jan 30 13:46:30.182993 kubelet[2514]: E0130 13:46:30.182951 2514 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:46:30.294301 kubelet[2514]: E0130 13:46:30.294268 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:30.296387 containerd[1467]: time="2025-01-30T13:46:30.296356669Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:46:30.317323 containerd[1467]: time="2025-01-30T13:46:30.317273059Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7\""
Jan 30 13:46:30.317839 containerd[1467]: time="2025-01-30T13:46:30.317774324Z" level=info msg="StartContainer for \"ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7\""
Jan 30 13:46:30.345841 systemd[1]: Started cri-containerd-ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7.scope - libcontainer container ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7.
Jan 30 13:46:30.368609 systemd[1]: cri-containerd-ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7.scope: Deactivated successfully.
Jan 30 13:46:30.388304 containerd[1467]: time="2025-01-30T13:46:30.388246627Z" level=info msg="StartContainer for \"ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7\" returns successfully"
Jan 30 13:46:30.412838 containerd[1467]: time="2025-01-30T13:46:30.412769076Z" level=info msg="shim disconnected" id=ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7 namespace=k8s.io
Jan 30 13:46:30.412838 containerd[1467]: time="2025-01-30T13:46:30.412819441Z" level=warning msg="cleaning up after shim disconnected" id=ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7 namespace=k8s.io
Jan 30 13:46:30.412838 containerd[1467]: time="2025-01-30T13:46:30.412829621Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:46:30.990284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea75d663fe6054236b6d1be0092b5a64a0d7499bc9fc23297f7f3a46d820fdf7-rootfs.mount: Deactivated successfully.
Jan 30 13:46:31.298800 kubelet[2514]: E0130 13:46:31.298758 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:31.300645 containerd[1467]: time="2025-01-30T13:46:31.300580779Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:46:31.316446 containerd[1467]: time="2025-01-30T13:46:31.316402079Z" level=info msg="CreateContainer within sandbox \"3b13abc1880387407d4da2b553d0614cf58bf6b064ffd375196d7e2209bb914a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"38e2d26d8a64a702aa7b9912aa91400bfd75aee97ab8157b32d5a3c837668c7f\""
Jan 30 13:46:31.317614 containerd[1467]: time="2025-01-30T13:46:31.317569072Z" level=info msg="StartContainer for \"38e2d26d8a64a702aa7b9912aa91400bfd75aee97ab8157b32d5a3c837668c7f\""
Jan 30 13:46:31.348834 systemd[1]: Started cri-containerd-38e2d26d8a64a702aa7b9912aa91400bfd75aee97ab8157b32d5a3c837668c7f.scope - libcontainer container 38e2d26d8a64a702aa7b9912aa91400bfd75aee97ab8157b32d5a3c837668c7f.
Jan 30 13:46:31.382125 containerd[1467]: time="2025-01-30T13:46:31.382066331Z" level=info msg="StartContainer for \"38e2d26d8a64a702aa7b9912aa91400bfd75aee97ab8157b32d5a3c837668c7f\" returns successfully"
Jan 30 13:46:31.814662 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 13:46:32.302938 kubelet[2514]: E0130 13:46:32.302910 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:33.108136 kubelet[2514]: E0130 13:46:33.108098 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:33.299695 systemd[1]: run-containerd-runc-k8s.io-38e2d26d8a64a702aa7b9912aa91400bfd75aee97ab8157b32d5a3c837668c7f-runc.rU3SqZ.mount: Deactivated successfully.
Jan 30 13:46:33.304893 kubelet[2514]: E0130 13:46:33.304853 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:34.306406 kubelet[2514]: E0130 13:46:34.306375 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:34.869572 systemd-networkd[1398]: lxc_health: Link UP
Jan 30 13:46:34.880457 systemd-networkd[1398]: lxc_health: Gained carrier
Jan 30 13:46:35.106925 kubelet[2514]: I0130 13:46:35.106869 2514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dpvkn" podStartSLOduration=9.106854573 podStartE2EDuration="9.106854573s" podCreationTimestamp="2025-01-30 13:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:32.316176758 +0000 UTC m=+87.281575108" watchObservedRunningTime="2025-01-30 13:46:35.106854573 +0000 UTC m=+90.072252923"
Jan 30 13:46:35.308391 kubelet[2514]: E0130 13:46:35.308349 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:36.310245 kubelet[2514]: E0130 13:46:36.310204 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:36.746979 systemd-networkd[1398]: lxc_health: Gained IPv6LL
Jan 30 13:46:37.311294 kubelet[2514]: E0130 13:46:37.311263 2514 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:46:41.747970 sshd[4352]: pam_unix(sshd:session): session closed for user core
Jan 30 13:46:41.752494 systemd[1]: sshd@28-10.0.0.97:22-10.0.0.1:57860.service: Deactivated successfully.
Jan 30 13:46:41.754590 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 13:46:41.755275 systemd-logind[1454]: Session 29 logged out. Waiting for processes to exit.
Jan 30 13:46:41.756246 systemd-logind[1454]: Removed session 29.