Apr 30 03:23:33.965621 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:23:33.965676 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:23:33.965689 kernel: BIOS-provided physical RAM map:
Apr 30 03:23:33.965695 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:23:33.965702 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 30 03:23:33.965708 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 30 03:23:33.965716 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 30 03:23:33.965722 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 30 03:23:33.965728 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Apr 30 03:23:33.965735 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Apr 30 03:23:33.965744 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Apr 30 03:23:33.965750 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Apr 30 03:23:33.965759 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Apr 30 03:23:33.965766 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Apr 30 03:23:33.965777 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Apr 30 03:23:33.965784 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 30 03:23:33.965793 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Apr 30 03:23:33.965800 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Apr 30 03:23:33.965807 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 30 03:23:33.965814 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 30 03:23:33.965820 kernel: NX (Execute Disable) protection: active
Apr 30 03:23:33.965827 kernel: APIC: Static calls initialized
Apr 30 03:23:33.965834 kernel: efi: EFI v2.7 by EDK II
Apr 30 03:23:33.965841 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Apr 30 03:23:33.965847 kernel: SMBIOS 2.8 present.
Apr 30 03:23:33.965854 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Apr 30 03:23:33.965861 kernel: Hypervisor detected: KVM
Apr 30 03:23:33.965870 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:23:33.965877 kernel: kvm-clock: using sched offset of 5108409496 cycles
Apr 30 03:23:33.965885 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:23:33.965892 kernel: tsc: Detected 2794.748 MHz processor
Apr 30 03:23:33.965899 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:23:33.965906 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:23:33.965928 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Apr 30 03:23:33.965935 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:23:33.965942 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:23:33.965952 kernel: Using GB pages for direct mapping
Apr 30 03:23:33.965959 kernel: Secure boot disabled
Apr 30 03:23:33.965966 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:23:33.965973 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 30 03:23:33.965984 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 03:23:33.965991 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:33.965999 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:33.966009 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 30 03:23:33.966016 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:33.966026 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:33.966034 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:33.966041 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:23:33.966048 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 03:23:33.966055 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 30 03:23:33.966065 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Apr 30 03:23:33.966073 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 30 03:23:33.966080 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 30 03:23:33.966087 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 30 03:23:33.966094 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 30 03:23:33.966101 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 30 03:23:33.966108 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 30 03:23:33.966115 kernel: No NUMA configuration found
Apr 30 03:23:33.966124 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Apr 30 03:23:33.966134 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Apr 30 03:23:33.966142 kernel: Zone ranges:
Apr 30 03:23:33.966149 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:23:33.966156 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Apr 30 03:23:33.966163 kernel: Normal empty
Apr 30 03:23:33.966171 kernel: Movable zone start for each node
Apr 30 03:23:33.966178 kernel: Early memory node ranges
Apr 30 03:23:33.966185 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:23:33.966192 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 30 03:23:33.966199 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 30 03:23:33.966209 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Apr 30 03:23:33.966216 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Apr 30 03:23:33.966223 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Apr 30 03:23:33.966232 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Apr 30 03:23:33.966240 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:23:33.966247 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:23:33.966254 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 30 03:23:33.966261 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:23:33.966269 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Apr 30 03:23:33.966279 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 30 03:23:33.966286 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Apr 30 03:23:33.966293 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:23:33.966301 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:23:33.966308 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:23:33.966315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:23:33.966322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:23:33.966330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:23:33.966337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:23:33.966347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:23:33.966354 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:23:33.966361 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:23:33.966369 kernel: TSC deadline timer available
Apr 30 03:23:33.966376 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 30 03:23:33.966383 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:23:33.966390 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 30 03:23:33.966397 kernel: kvm-guest: setup PV sched yield
Apr 30 03:23:33.966404 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 30 03:23:33.966415 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:23:33.966422 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:23:33.966429 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 30 03:23:33.966437 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Apr 30 03:23:33.966444 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Apr 30 03:23:33.966451 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 30 03:23:33.966458 kernel: kvm-guest: PV spinlocks enabled
Apr 30 03:23:33.966465 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:23:33.966473 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:23:33.966486 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:23:33.966494 kernel: random: crng init done
Apr 30 03:23:33.966501 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 03:23:33.966508 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:23:33.966515 kernel: Fallback order for Node 0: 0
Apr 30 03:23:33.966522 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Apr 30 03:23:33.966530 kernel: Policy zone: DMA32
Apr 30 03:23:33.966537 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:23:33.966547 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 171124K reserved, 0K cma-reserved)
Apr 30 03:23:33.966554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 03:23:33.966562 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:23:33.966569 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:23:33.966576 kernel: Dynamic Preempt: voluntary
Apr 30 03:23:33.966592 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:23:33.966603 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:23:33.966611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 03:23:33.966618 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:23:33.966626 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:23:33.966634 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:23:33.966642 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:23:33.966652 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 03:23:33.966666 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 30 03:23:33.966676 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:23:33.966684 kernel: Console: colour dummy device 80x25
Apr 30 03:23:33.966692 kernel: printk: console [ttyS0] enabled
Apr 30 03:23:33.966702 kernel: ACPI: Core revision 20230628
Apr 30 03:23:33.966711 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:23:33.966719 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:23:33.966726 kernel: x2apic enabled
Apr 30 03:23:33.966734 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:23:33.966742 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 30 03:23:33.966750 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 30 03:23:33.966757 kernel: kvm-guest: setup PV IPIs
Apr 30 03:23:33.966765 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:23:33.966776 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 03:23:33.966783 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 30 03:23:33.966791 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 03:23:33.966799 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 03:23:33.966807 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 03:23:33.966815 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:23:33.966823 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:23:33.966831 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:23:33.966839 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:23:33.966849 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 03:23:33.966861 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 03:23:33.966875 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:23:33.966893 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:23:33.966909 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 30 03:23:33.966939 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 30 03:23:33.966950 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 30 03:23:33.966968 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:23:33.966988 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:23:33.967002 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:23:33.967016 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:23:33.967031 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 03:23:33.967049 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:23:33.967057 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:23:33.967065 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:23:33.967073 kernel: landlock: Up and running.
Apr 30 03:23:33.967080 kernel: SELinux: Initializing.
Apr 30 03:23:33.967091 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 03:23:33.967099 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 03:23:33.967107 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 03:23:33.967115 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 03:23:33.967123 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 03:23:33.967131 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 03:23:33.967138 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 03:23:33.967146 kernel: ... version: 0
Apr 30 03:23:33.967154 kernel: ... bit width: 48
Apr 30 03:23:33.967164 kernel: ... generic registers: 6
Apr 30 03:23:33.967172 kernel: ... value mask: 0000ffffffffffff
Apr 30 03:23:33.967180 kernel: ... max period: 00007fffffffffff
Apr 30 03:23:33.967187 kernel: ... fixed-purpose events: 0
Apr 30 03:23:33.967195 kernel: ... event mask: 000000000000003f
Apr 30 03:23:33.967203 kernel: signal: max sigframe size: 1776
Apr 30 03:23:33.967210 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:23:33.967218 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:23:33.967226 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:23:33.967238 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:23:33.967247 kernel: .... node #0, CPUs: #1 #2 #3
Apr 30 03:23:33.967256 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 03:23:33.967266 kernel: smpboot: Max logical packages: 1
Apr 30 03:23:33.967275 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 30 03:23:33.967284 kernel: devtmpfs: initialized
Apr 30 03:23:33.967294 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:23:33.967304 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 30 03:23:33.967326 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 30 03:23:33.967338 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Apr 30 03:23:33.967346 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 30 03:23:33.967354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 30 03:23:33.967362 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:23:33.967369 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 03:23:33.967377 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:23:33.967386 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:23:33.967396 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:23:33.967406 kernel: audit: type=2000 audit(1745983412.608:1): state=initialized audit_enabled=0 res=1
Apr 30 03:23:33.967419 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:23:33.967430 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:23:33.967438 kernel: cpuidle: using governor menu
Apr 30 03:23:33.967445 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:23:33.967454 kernel: dca service started, version 1.12.1
Apr 30 03:23:33.967465 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 30 03:23:33.967476 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 30 03:23:33.967484 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:23:33.967492 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:23:33.967504 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:23:33.967514 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:23:33.967525 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:23:33.967533 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:23:33.967541 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:23:33.967549 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:23:33.967557 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:23:33.967565 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:23:33.967572 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:23:33.967583 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:23:33.967591 kernel: ACPI: Interpreter enabled
Apr 30 03:23:33.967599 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 30 03:23:33.967607 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:23:33.967614 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:23:33.967622 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:23:33.967630 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 03:23:33.967638 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:23:33.967945 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:23:33.968095 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 03:23:33.968222 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 03:23:33.968233 kernel: PCI host bridge to bus 0000:00
Apr 30 03:23:33.968395 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:23:33.968517 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:23:33.968634 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:23:33.968767 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 30 03:23:33.968883 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 30 03:23:33.969018 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Apr 30 03:23:33.969136 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:23:33.969305 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 03:23:33.969452 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 30 03:23:33.969636 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 30 03:23:33.969950 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 30 03:23:33.970123 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 03:23:33.970263 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 30 03:23:33.970399 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:23:33.970560 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 03:23:33.970733 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 30 03:23:33.970904 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 30 03:23:33.971106 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Apr 30 03:23:33.971261 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 30 03:23:33.971391 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 30 03:23:33.971519 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 30 03:23:33.971646 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Apr 30 03:23:33.971802 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 03:23:33.971985 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 30 03:23:33.973141 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 30 03:23:33.973285 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Apr 30 03:23:33.973414 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 30 03:23:33.973591 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 03:23:33.973737 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 03:23:33.973883 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 03:23:33.974043 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 30 03:23:33.974169 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 30 03:23:33.974315 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 03:23:33.974443 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 30 03:23:33.974454 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:23:33.974462 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:23:33.974470 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:23:33.974483 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:23:33.974490 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 03:23:33.974498 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 03:23:33.974506 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 03:23:33.974514 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 03:23:33.974521 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 03:23:33.974530 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 03:23:33.974537 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 03:23:33.974545 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 03:23:33.974555 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 03:23:33.974563 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 03:23:33.974570 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 03:23:33.974578 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 03:23:33.974586 kernel: iommu: Default domain type: Translated
Apr 30 03:23:33.974594 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:23:33.974602 kernel: efivars: Registered efivars operations
Apr 30 03:23:33.974610 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:23:33.974617 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:23:33.974628 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 30 03:23:33.974636 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Apr 30 03:23:33.974643 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Apr 30 03:23:33.974651 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Apr 30 03:23:33.974787 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 03:23:33.974926 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 03:23:33.975070 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:23:33.975082 kernel: vgaarb: loaded
Apr 30 03:23:33.975090 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:23:33.975103 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:23:33.975111 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:23:33.975119 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:23:33.975128 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:23:33.975136 kernel: pnp: PnP ACPI init
Apr 30 03:23:33.975293 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 30 03:23:33.975306 kernel: pnp: PnP ACPI: found 6 devices
Apr 30 03:23:33.975314 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:23:33.975326 kernel: NET: Registered PF_INET protocol family
Apr 30 03:23:33.975334 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 03:23:33.975342 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 03:23:33.975350 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:23:33.975358 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:23:33.975366 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 03:23:33.975374 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 03:23:33.975382 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 03:23:33.975389 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 03:23:33.975400 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:23:33.975408 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:23:33.975537 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 30 03:23:33.975674 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 30 03:23:33.975797 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:23:33.975926 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:23:33.976070 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:23:33.976187 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 30 03:23:33.976309 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 03:23:33.976425 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Apr 30 03:23:33.976436 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:23:33.976444 kernel: Initialise system trusted keyrings
Apr 30 03:23:33.976452 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 03:23:33.976460 kernel: Key type asymmetric registered
Apr 30 03:23:33.976468 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:23:33.976476 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:23:33.976489 kernel: io scheduler mq-deadline registered
Apr 30 03:23:33.976496 kernel: io scheduler kyber registered
Apr 30 03:23:33.976504 kernel: io scheduler bfq registered
Apr 30 03:23:33.976512 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:23:33.976520 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 03:23:33.976528 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 03:23:33.976536 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 30 03:23:33.976544 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:23:33.976552 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:23:33.976560 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:23:33.976571 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:23:33.976579 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:23:33.976729 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 30 03:23:33.976741 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 03:23:33.976861 kernel: rtc_cmos 00:04: registered as rtc0
Apr 30 03:23:33.977013 kernel: rtc_cmos 00:04: setting system clock to 2025-04-30T03:23:33 UTC (1745983413)
Apr 30 03:23:33.977134 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 30 03:23:33.977150 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 03:23:33.977158 kernel: efifb: probing for efifb
Apr 30 03:23:33.977166 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Apr 30 03:23:33.977174 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Apr 30 03:23:33.977181 kernel: efifb: scrolling: redraw
Apr 30 03:23:33.977189 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Apr 30 03:23:33.977197 kernel: Console: switching to colour frame buffer device 100x37
Apr 30 03:23:33.977224 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:23:33.977235 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:23:33.977245 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:23:33.977253 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:23:33.977261 kernel: Segment Routing with IPv6
Apr 30 03:23:33.977269 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:23:33.977277 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:23:33.977285 kernel: Key type dns_resolver registered
Apr 30 03:23:33.977293 kernel: IPI shorthand broadcast: enabled
Apr 30 03:23:33.977301 kernel: sched_clock: Marking stable (1261002567, 120992096)->(1520632953, -138638290)
Apr 30 03:23:33.977309 kernel: registered taskstats version 1
Apr 30 03:23:33.977317 kernel: Loading compiled-in X.509 certificates
Apr 30 03:23:33.977329 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:23:33.977337 kernel: Key type .fscrypt registered
Apr 30 03:23:33.977345 kernel: Key type fscrypt-provisioning registered
Apr 30 03:23:33.977353 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:23:33.977361 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:23:33.977369 kernel: ima: No architecture policies found
Apr 30 03:23:33.977377 kernel: clk: Disabling unused clocks
Apr 30 03:23:33.977385 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:23:33.977396 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:23:33.977404 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:23:33.977412 kernel: Run /init as init process
Apr 30 03:23:33.977420 kernel: with arguments:
Apr 30 03:23:33.977427 kernel: /init
Apr 30 03:23:33.977435 kernel: with environment:
Apr 30 03:23:33.977443 kernel: HOME=/
Apr 30 03:23:33.977451 kernel: TERM=linux
Apr 30 03:23:33.977459 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:23:33.977477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:23:33.977490 systemd[1]: Detected virtualization kvm.
Apr 30 03:23:33.977502 systemd[1]: Detected architecture x86-64.
Apr 30 03:23:33.977514 systemd[1]: Running in initrd.
Apr 30 03:23:33.977534 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:23:33.977543 systemd[1]: Hostname set to .
Apr 30 03:23:33.977552 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:23:33.977561 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:23:33.977570 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:23:33.977578 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:23:33.977588 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:23:33.977597 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:23:33.977609 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:23:33.977618 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:23:33.977628 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:23:33.977637 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:23:33.977646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:23:33.977655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:23:33.977673 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:23:33.977685 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:23:33.977693 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:23:33.977702 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:23:33.977711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:23:33.977720 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:23:33.977729 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:23:33.977737 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:23:33.977746 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:23:33.977758 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:23:33.977766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:23:33.977775 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:23:33.977784 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:23:33.977793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:23:33.977801 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:23:33.977810 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:23:33.977819 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:23:33.977827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:23:33.977839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:33.977848 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:23:33.977857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:23:33.977865 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:23:33.977896 systemd-journald[193]: Collecting audit messages is disabled.
Apr 30 03:23:33.977935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:23:33.977943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:33.977953 systemd-journald[193]: Journal started
Apr 30 03:23:33.977974 systemd-journald[193]: Runtime Journal (/run/log/journal/15e45a49facb42a397118887b56a9f21) is 6.0M, max 48.3M, 42.2M free.
Apr 30 03:23:33.968356 systemd-modules-load[194]: Inserted module 'overlay'
Apr 30 03:23:33.980418 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:23:33.984944 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:23:33.986389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:23:33.987377 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:23:33.989482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:23:34.002176 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:23:34.005110 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 30 03:23:34.006124 kernel: Bridge firewalling registered
Apr 30 03:23:34.007476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:23:34.010281 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:23:34.012023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:23:34.026122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:23:34.027651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:34.031453 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:23:34.038652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:23:34.096739 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:23:34.107738 dracut-cmdline[228]: dracut-dracut-053
Apr 30 03:23:34.110975 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:23:34.132260 systemd-resolved[231]: Positive Trust Anchors:
Apr 30 03:23:34.132278 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:23:34.132310 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:23:34.135104 systemd-resolved[231]: Defaulting to hostname 'linux'.
Apr 30 03:23:34.136424 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:23:34.155538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:23:34.202968 kernel: SCSI subsystem initialized
Apr 30 03:23:34.233969 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:23:34.246975 kernel: iscsi: registered transport (tcp)
Apr 30 03:23:34.269123 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:23:34.269237 kernel: QLogic iSCSI HBA Driver
Apr 30 03:23:34.321998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:23:34.339166 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:23:34.397370 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:23:34.397483 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:23:34.397497 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:23:34.474006 kernel: raid6: avx2x4 gen() 25423 MB/s
Apr 30 03:23:34.490992 kernel: raid6: avx2x2 gen() 26308 MB/s
Apr 30 03:23:34.508143 kernel: raid6: avx2x1 gen() 25309 MB/s
Apr 30 03:23:34.508244 kernel: raid6: using algorithm avx2x2 gen() 26308 MB/s
Apr 30 03:23:34.540405 kernel: raid6: .... xor() 19399 MB/s, rmw enabled
Apr 30 03:23:34.540514 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 03:23:34.570059 kernel: xor: automatically using best checksumming function avx
Apr 30 03:23:34.740967 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:23:34.759474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:23:34.774390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:23:34.788557 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Apr 30 03:23:34.793893 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:23:34.810256 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:23:34.829997 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Apr 30 03:23:34.869526 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:23:34.878194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:23:34.952074 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:23:34.962347 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:23:34.993156 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:23:35.011076 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:23:35.011107 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 30 03:23:35.034542 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 30 03:23:35.034885 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:23:35.034906 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:23:35.034937 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:23:35.034953 kernel: GPT:9289727 != 19775487
Apr 30 03:23:35.034967 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:23:35.034980 kernel: GPT:9289727 != 19775487
Apr 30 03:23:35.034991 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:23:35.035001 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:23:35.035012 kernel: libata version 3.00 loaded.
Apr 30 03:23:35.006425 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:23:35.050398 kernel: ahci 0000:00:1f.2: version 3.0
Apr 30 03:23:35.086594 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 30 03:23:35.086612 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 30 03:23:35.086799 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 30 03:23:35.087121 kernel: scsi host0: ahci
Apr 30 03:23:35.087293 kernel: scsi host1: ahci
Apr 30 03:23:35.087492 kernel: scsi host2: ahci
Apr 30 03:23:35.087662 kernel: scsi host3: ahci
Apr 30 03:23:35.087829 kernel: scsi host4: ahci
Apr 30 03:23:35.088005 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (457)
Apr 30 03:23:35.088018 kernel: scsi host5: ahci
Apr 30 03:23:35.088168 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Apr 30 03:23:35.088180 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Apr 30 03:23:35.088196 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Apr 30 03:23:35.088206 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Apr 30 03:23:35.088216 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Apr 30 03:23:35.088227 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Apr 30 03:23:35.088240 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (472)
Apr 30 03:23:35.008252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:23:35.009794 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:23:35.017101 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:23:35.047699 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:23:35.057124 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:23:35.057192 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:35.059727 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:23:35.062593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:23:35.062690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:35.074863 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:35.090146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:35.145867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:35.164785 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 03:23:35.182346 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 03:23:35.201806 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 03:23:35.209723 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 03:23:35.232475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 03:23:35.244071 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:23:35.247287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:23:35.272508 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:35.381070 disk-uuid[566]: Primary Header is updated.
Apr 30 03:23:35.381070 disk-uuid[566]: Secondary Entries is updated.
Apr 30 03:23:35.381070 disk-uuid[566]: Secondary Header is updated.
Apr 30 03:23:35.384949 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:23:35.389942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:23:35.399956 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 30 03:23:35.399999 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 30 03:23:35.400026 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 30 03:23:35.400041 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 30 03:23:35.400058 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 30 03:23:35.400949 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 30 03:23:35.401944 kernel: ata3.00: applying bridge limits
Apr 30 03:23:35.403965 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 30 03:23:35.404035 kernel: ata3.00: configured for UDMA/100
Apr 30 03:23:35.406045 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 03:23:35.454393 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 30 03:23:35.469686 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:23:35.469712 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 30 03:23:36.395945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 03:23:36.396498 disk-uuid[576]: The operation has completed successfully.
Apr 30 03:23:36.435293 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:23:36.435487 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:23:36.459236 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:23:36.466358 sh[591]: Success
Apr 30 03:23:36.482948 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 30 03:23:36.520763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:23:36.535936 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:23:36.539242 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:23:36.589649 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:23:36.589745 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:23:36.589758 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:23:36.591777 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:23:36.591795 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:23:36.597876 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:23:36.642230 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:23:36.654099 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:23:36.656569 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:23:36.666364 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:23:36.666415 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:23:36.666426 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:23:36.705954 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:23:36.716417 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:23:36.724260 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:23:36.777165 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:23:36.789150 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:23:36.862785 systemd-networkd[769]: lo: Link UP
Apr 30 03:23:36.862798 systemd-networkd[769]: lo: Gained carrier
Apr 30 03:23:36.864824 systemd-networkd[769]: Enumeration completed
Apr 30 03:23:36.864945 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:23:36.865355 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:23:36.865360 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:23:36.884377 systemd-networkd[769]: eth0: Link UP
Apr 30 03:23:36.884386 systemd-networkd[769]: eth0: Gained carrier
Apr 30 03:23:36.884402 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:23:36.884727 systemd[1]: Reached target network.target - Network.
Apr 30 03:23:36.948021 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 03:23:37.052934 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:23:37.059331 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:23:37.123866 ignition[774]: Ignition 2.19.0
Apr 30 03:23:37.123884 ignition[774]: Stage: fetch-offline
Apr 30 03:23:37.123954 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:37.123970 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 03:23:37.124130 ignition[774]: parsed url from cmdline: ""
Apr 30 03:23:37.124136 ignition[774]: no config URL provided
Apr 30 03:23:37.124144 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:23:37.124158 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:23:37.124197 ignition[774]: op(1): [started] loading QEMU firmware config module
Apr 30 03:23:37.124209 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 30 03:23:37.150600 ignition[774]: op(1): [finished] loading QEMU firmware config module
Apr 30 03:23:37.190232 ignition[774]: parsing config with SHA512: 286ee94dbe78f9ecea8b1701f64102f8830e959be39d75c38d44334bbd2c236d68afb3ff4e5e4bab1e1c2872aba8434b96e6293a15c7d3053472c69f48a8ed74
Apr 30 03:23:37.195549 unknown[774]: fetched base config from "system"
Apr 30 03:23:37.195564 unknown[774]: fetched user config from "qemu"
Apr 30 03:23:37.196100 ignition[774]: fetch-offline: fetch-offline passed
Apr 30 03:23:37.196177 ignition[774]: Ignition finished successfully
Apr 30 03:23:37.219698 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:23:37.222307 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 30 03:23:37.246249 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:23:37.262422 ignition[783]: Ignition 2.19.0
Apr 30 03:23:37.262434 ignition[783]: Stage: kargs
Apr 30 03:23:37.262611 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:37.262624 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 03:23:37.263543 ignition[783]: kargs: kargs passed
Apr 30 03:23:37.263602 ignition[783]: Ignition finished successfully
Apr 30 03:23:37.270365 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:23:37.301079 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:23:37.317773 ignition[792]: Ignition 2.19.0
Apr 30 03:23:37.317789 ignition[792]: Stage: disks
Apr 30 03:23:37.317992 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:37.318003 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 03:23:37.357370 ignition[792]: disks: disks passed
Apr 30 03:23:37.357433 ignition[792]: Ignition finished successfully
Apr 30 03:23:37.361314 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:23:37.361767 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:23:37.362168 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:23:37.362499 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:23:37.362848 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:23:37.363181 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:23:37.382255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:23:37.395061 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 03:23:37.691569 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:23:37.704095 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:23:37.807960 kernel: EXT4-fs (vda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:23:37.809272 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:23:37.810162 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:23:37.824125 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:23:37.843310 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:23:37.843819 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 03:23:37.843876 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:23:37.843905 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:23:37.858772 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Apr 30 03:23:37.858831 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:23:37.858852 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:23:37.860463 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:23:37.863956 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:23:37.865785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:23:37.904173 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:23:37.921253 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:23:37.974265 systemd-networkd[769]: eth0: Gained IPv6LL
Apr 30 03:23:37.984930 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:23:37.990254 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:23:37.995223 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:23:38.000293 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:23:38.128468 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:23:38.143049 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:23:38.144869 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:23:38.152181 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:23:38.153689 kernel: BTRFS info (device vda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:23:38.176016 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:23:38.190306 ignition[924]: INFO : Ignition 2.19.0
Apr 30 03:23:38.190306 ignition[924]: INFO : Stage: mount
Apr 30 03:23:38.192288 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:38.192288 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 03:23:38.192288 ignition[924]: INFO : mount: mount passed
Apr 30 03:23:38.192288 ignition[924]: INFO : Ignition finished successfully
Apr 30 03:23:38.194065 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:23:38.213166 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:23:38.822187 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:23:38.830947 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Apr 30 03:23:38.832981 kernel: BTRFS info (device vda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:23:38.833007 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:23:38.833022 kernel: BTRFS info (device vda6): using free space tree
Apr 30 03:23:38.835942 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 03:23:38.837898 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:23:38.863323 ignition[955]: INFO : Ignition 2.19.0
Apr 30 03:23:38.863323 ignition[955]: INFO : Stage: files
Apr 30 03:23:38.865376 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:38.865376 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 03:23:38.865376 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:23:38.869025 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 03:23:38.869025 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:23:38.872427 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:23:38.872427 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 03:23:38.872427 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:23:38.872427 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 03:23:38.872427 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Apr 30 03:23:38.869742 unknown[955]: wrote ssh authorized keys file for user: core
Apr 30 03:23:38.950415 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:23:39.099246 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 03:23:39.099246 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 03:23:39.103892 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 03:23:39.455760 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 03:23:39.620062 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:23:39.622439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Apr 30 03:23:40.031513 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 03:23:40.714276 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 03:23:40.714276 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 03:23:40.728234 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:23:40.730862 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:23:40.730862 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 03:23:40.730862 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 30 03:23:40.730862 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 03:23:40.730862 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 03:23:40.730862 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 03:23:40.730862 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 03:23:40.793081 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 03:23:40.799487 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 03:23:40.801234 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 03:23:40.801234 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:23:40.801234 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:23:40.801234 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:23:40.801234 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:23:40.801234 ignition[955]: INFO : files: files passed
Apr 30 03:23:40.801234 ignition[955]: INFO : Ignition finished successfully
Apr 30 03:23:40.812988 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:23:40.823173 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:23:40.825411 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:23:40.827571 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:23:40.827694 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:23:40.836503 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 03:23:40.839505 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:23:40.839505 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:23:40.844346 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:23:40.842200 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:23:40.844999 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:23:40.854160 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:23:40.885372 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:23:40.885544 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:23:40.888295 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:23:40.890411 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:23:40.892542 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:23:40.902219 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:23:40.919936 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:23:40.927173 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:23:40.941435 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:23:40.943047 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:23:40.945578 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:23:40.947711 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:23:40.947851 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:23:40.950245 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:23:40.951829 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:23:40.953981 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:23:40.956162 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:23:40.958526 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:23:40.960993 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:23:40.963290 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:23:40.965664 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:23:40.967732 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:23:40.969988 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:23:40.971781 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:23:40.971930 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:23:40.974302 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:23:40.975781 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:23:40.977961 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:23:40.978090 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:23:40.980285 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:23:40.980401 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:23:40.983041 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:23:40.983247 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:23:40.985108 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:23:40.986953 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:23:40.991019 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:23:40.992873 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:23:40.994865 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:23:40.996741 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:23:40.996843 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:23:40.998807 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:23:40.998902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:23:41.001384 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:23:41.001512 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:23:41.003900 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:23:41.004051 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:23:41.014193 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:23:41.016043 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:23:41.016222 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:23:41.019500 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:23:41.020440 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:23:41.020603 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:23:41.023266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:23:41.023392 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:23:41.031165 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:23:41.031303 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:23:41.034758 ignition[1011]: INFO : Ignition 2.19.0
Apr 30 03:23:41.034758 ignition[1011]: INFO : Stage: umount
Apr 30 03:23:41.034758 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:23:41.034758 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 03:23:41.034758 ignition[1011]: INFO : umount: umount passed
Apr 30 03:23:41.034758 ignition[1011]: INFO : Ignition finished successfully
Apr 30 03:23:41.035762 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:23:41.035910 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:23:41.038438 systemd[1]: Stopped target network.target - Network.
Apr 30 03:23:41.039896 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:23:41.040001 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:23:41.042253 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:23:41.042305 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:23:41.044201 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:23:41.044266 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:23:41.046315 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:23:41.046375 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:23:41.046656 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:23:41.047245 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:23:41.053361 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:23:41.054007 systemd-networkd[769]: eth0: DHCPv6 lease lost
Apr 30 03:23:41.055702 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:23:41.055846 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:23:41.060072 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:23:41.060223 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:23:41.062782 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:23:41.062907 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:23:41.066441 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:23:41.066524 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:23:41.068385 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:23:41.068451 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:23:41.082109 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:23:41.083758 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:23:41.083863 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:23:41.086220 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:23:41.086288 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:23:41.088239 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:23:41.088295 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:23:41.090645 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:23:41.090701 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:23:41.093417 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:23:41.107006 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:23:41.107176 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:23:41.111439 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:23:41.112695 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:23:41.115867 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:23:41.115944 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:23:41.119143 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:23:41.119196 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:23:41.121455 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:23:41.121521 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:23:41.125597 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:23:41.125658 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:23:41.128669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:23:41.128729 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:23:41.150171 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:23:41.152929 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:23:41.153025 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:23:41.157235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:23:41.157309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:41.161472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:23:41.162828 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:23:41.166272 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:23:41.169825 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:23:41.182064 systemd[1]: Switching root.
Apr 30 03:23:41.213767 systemd-journald[193]: Journal stopped
Apr 30 03:23:42.742250 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:23:42.742340 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:23:42.742355 kernel: SELinux: policy capability open_perms=1
Apr 30 03:23:42.742367 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:23:42.742383 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:23:42.742395 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:23:42.742406 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:23:42.742428 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:23:42.742439 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:23:42.742451 kernel: audit: type=1403 audit(1745983421.753:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:23:42.742479 systemd[1]: Successfully loaded SELinux policy in 43.211ms.
Apr 30 03:23:42.742500 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.497ms.
Apr 30 03:23:42.742514 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:23:42.742529 systemd[1]: Detected virtualization kvm.
Apr 30 03:23:42.742542 systemd[1]: Detected architecture x86-64.
Apr 30 03:23:42.742554 systemd[1]: Detected first boot.
Apr 30 03:23:42.742566 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:23:42.742578 zram_generator::config[1055]: No configuration found.
Apr 30 03:23:42.742591 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:23:42.742603 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:23:42.742616 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:23:42.742632 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:23:42.742645 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:23:42.742657 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:23:42.742670 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:23:42.742682 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:23:42.742695 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:23:42.742708 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:23:42.742720 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:23:42.742733 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:23:42.742748 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:23:42.742760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:23:42.742773 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:23:42.742785 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:23:42.742798 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:23:42.742811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:23:42.742823 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:23:42.742836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:23:42.742848 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:23:42.742862 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:23:42.742875 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:23:42.742887 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:23:42.742900 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:23:42.742912 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:23:42.742960 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:23:42.742973 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:23:42.742985 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:23:42.743002 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:23:42.743014 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:23:42.743027 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:23:42.743039 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:23:42.743051 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:23:42.743063 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:23:42.743076 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:23:42.743088 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:23:42.743105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:23:42.743120 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:23:42.743132 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:23:42.743144 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:23:42.743157 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:23:42.743169 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:23:42.743181 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:23:42.743194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:23:42.743207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:23:42.743221 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:23:42.743234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:23:42.743246 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:23:42.743258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:23:42.743271 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:23:42.743283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:23:42.743295 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:23:42.743308 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:23:42.743325 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:23:42.743337 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:23:42.744304 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:23:42.744317 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:23:42.744329 kernel: fuse: init (API version 7.39)
Apr 30 03:23:42.744341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:23:42.744353 kernel: loop: module loaded
Apr 30 03:23:42.744364 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:23:42.744376 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:23:42.744393 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:23:42.744406 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:23:42.744418 systemd[1]: Stopped verity-setup.service.
Apr 30 03:23:42.744431 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:23:42.744444 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:23:42.744456 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:23:42.744482 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:23:42.744497 kernel: ACPI: bus type drm_connector registered
Apr 30 03:23:42.744509 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:23:42.744544 systemd-journald[1125]: Collecting audit messages is disabled.
Apr 30 03:23:42.744566 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:23:42.744578 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:23:42.744601 systemd-journald[1125]: Journal started
Apr 30 03:23:42.744622 systemd-journald[1125]: Runtime Journal (/run/log/journal/15e45a49facb42a397118887b56a9f21) is 6.0M, max 48.3M, 42.2M free.
Apr 30 03:23:42.396125 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:23:42.416906 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 03:23:42.417450 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:23:42.747026 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:23:42.749124 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:23:42.750162 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:23:42.751835 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:23:42.752091 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:23:42.753682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:23:42.753884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:23:42.755419 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:23:42.755620 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:23:42.757038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:23:42.757227 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:23:42.758811 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:23:42.759116 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:23:42.760562 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:23:42.760746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:23:42.762408 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:23:42.763931 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:23:42.765864 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:23:42.788091 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:23:42.803124 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:23:42.805897 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:23:42.807090 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:23:42.807123 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:23:42.809174 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:23:42.811646 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:23:42.818754 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:23:42.821104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:23:42.823320 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:23:42.830053 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:23:42.831434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:23:42.834347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:23:42.835658 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:23:42.841359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:23:42.843898 systemd-journald[1125]: Time spent on flushing to /var/log/journal/15e45a49facb42a397118887b56a9f21 is 19.407ms for 992 entries.
Apr 30 03:23:42.843898 systemd-journald[1125]: System Journal (/var/log/journal/15e45a49facb42a397118887b56a9f21) is 8.0M, max 195.6M, 187.6M free.
Apr 30 03:23:42.898161 systemd-journald[1125]: Received client request to flush runtime journal.
Apr 30 03:23:42.898407 kernel: loop0: detected capacity change from 0 to 218376
Apr 30 03:23:42.846980 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:23:42.852112 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:23:42.856977 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:23:42.858953 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:23:42.860330 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:23:42.861897 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:23:42.864485 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:23:42.875329 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:23:42.890596 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:23:42.898779 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:23:42.900714 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:23:42.902597 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:23:42.907636 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:23:42.917596 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 03:23:42.922657 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:23:42.924688 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:23:42.926344 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:23:42.935079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:23:42.940934 kernel: loop1: detected capacity change from 0 to 142488
Apr 30 03:23:42.960943 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 30 03:23:42.961376 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 30 03:23:42.968240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:23:43.082958 kernel: loop2: detected capacity change from 0 to 140768
Apr 30 03:23:43.125969 kernel: loop3: detected capacity change from 0 to 218376
Apr 30 03:23:43.161957 kernel: loop4: detected capacity change from 0 to 142488
Apr 30 03:23:43.173935 kernel: loop5: detected capacity change from 0 to 140768
Apr 30 03:23:43.183166 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 30 03:23:43.183816 (sd-merge)[1193]: Merged extensions into '/usr'.
Apr 30 03:23:43.189652 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:23:43.189674 systemd[1]: Reloading...
Apr 30 03:23:43.267959 zram_generator::config[1219]: No configuration found.
Apr 30 03:23:43.434016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:23:43.491244 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:23:43.499338 systemd[1]: Reloading finished in 309 ms.
Apr 30 03:23:43.536959 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:23:43.538741 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:23:43.556300 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:23:43.559433 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:23:43.564865 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:23:43.564998 systemd[1]: Reloading...
Apr 30 03:23:43.650880 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:23:43.651323 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:23:43.652416 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:23:43.652735 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Apr 30 03:23:43.652830 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Apr 30 03:23:43.659946 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:23:43.659962 systemd-tmpfiles[1257]: Skipping /boot
Apr 30 03:23:43.680561 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:23:43.680741 systemd-tmpfiles[1257]: Skipping /boot
Apr 30 03:23:43.682945 zram_generator::config[1283]: No configuration found.
Apr 30 03:23:43.802667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:23:43.855163 systemd[1]: Reloading finished in 289 ms.
Apr 30 03:23:43.876797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:23:43.889770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:23:43.901031 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:23:43.904777 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:23:43.908041 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:23:43.913315 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:23:43.919234 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:23:43.925314 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:23:43.931804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:23:43.932130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:23:43.935027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:23:43.938899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:23:43.949253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:23:43.950767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:23:43.953148 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:23:43.954499 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:23:43.955368 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Apr 30 03:23:43.955625 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 03:23:43.958586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:23:43.958790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:23:43.960842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:23:43.961035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:23:43.963115 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:23:43.964274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:23:43.973777 augenrules[1348]: No rules
Apr 30 03:23:43.977611 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:23:43.982970 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 03:23:43.985240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:23:43.991762 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:23:43.992080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:23:44.005460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:23:44.008638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:23:44.014001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:23:44.015419 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:23:44.021469 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:23:44.030203 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 03:23:44.031530 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:23:44.032539 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:23:44.043553 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 03:23:44.045818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:23:44.046039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:23:44.049185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:23:44.049452 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:23:44.051887 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:23:44.052154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:23:44.064434 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 03:23:44.076381 systemd[1]: Finished ensure-sysext.service.
Apr 30 03:23:44.086344 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 03:23:44.086942 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1375)
Apr 30 03:23:44.089065 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:23:44.089282 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:23:44.099206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:23:44.103217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:23:44.106119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:23:44.109363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:23:44.111430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:23:44.114250 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 03:23:44.116279 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:23:44.116318 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:23:44.117023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:23:44.117282 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:23:44.122561 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:23:44.122812 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:23:44.125664 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:23:44.126133 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:23:44.139328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:23:44.139889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:23:44.159253 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:23:44.159354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:23:44.166554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 03:23:44.168876 systemd-resolved[1327]: Positive Trust Anchors:
Apr 30 03:23:44.168977 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:23:44.169022 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:23:44.173233 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 03:23:44.176502 systemd-networkd[1380]: lo: Link UP
Apr 30 03:23:44.176509 systemd-networkd[1380]: lo: Gained carrier
Apr 30 03:23:44.178754 systemd-networkd[1380]: Enumeration completed
Apr 30 03:23:44.178883 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:23:44.180727 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Apr 30 03:23:44.183232 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:23:44.183245 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:23:44.184276 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:23:44.184319 systemd-networkd[1380]: eth0: Link UP
Apr 30 03:23:44.184324 systemd-networkd[1380]: eth0: Gained carrier
Apr 30 03:23:44.184338 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:23:44.188664 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 03:23:44.189191 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 03:23:44.190896 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:23:44.192461 systemd[1]: Reached target network.target - Network.
Apr 30 03:23:44.193720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:23:44.195948 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:23:44.203057 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 03:23:44.219034 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 30 03:23:44.217646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 03:23:44.230031 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 30 03:23:44.233102 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 30 03:23:44.233335 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 30 03:23:44.233603 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 30 03:23:44.247333 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 03:23:44.247883 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 30 03:23:44.248166 systemd-timesyncd[1401]: Initial clock synchronization to Wed 2025-04-30 03:23:44.646710 UTC.
Apr 30 03:23:44.250099 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 03:23:44.268950 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:23:44.285234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:44.289292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:23:44.289532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:44.296391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:23:44.360319 kernel: kvm_amd: TSC scaling supported
Apr 30 03:23:44.360393 kernel: kvm_amd: Nested Virtualization enabled
Apr 30 03:23:44.360407 kernel: kvm_amd: Nested Paging enabled
Apr 30 03:23:44.361496 kernel: kvm_amd: LBR virtualization supported
Apr 30 03:23:44.361525 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Apr 30 03:23:44.362153 kernel: kvm_amd: Virtual GIF supported
Apr 30 03:23:44.383964 kernel: EDAC MC: Ver: 3.0.0
Apr 30 03:23:44.399752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:23:44.424778 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:23:44.435193 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:23:44.445503 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:23:44.477681 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 03:23:44.479486 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:23:44.480790 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:23:44.482192 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 03:23:44.483808 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 03:23:44.485413 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 03:23:44.486784 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 03:23:44.488070 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 03:23:44.489338 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 03:23:44.489371 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:23:44.490343 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:23:44.492383 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 03:23:44.495421 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 03:23:44.504103 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 03:23:44.506637 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 03:23:44.508470 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 03:23:44.509735 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:23:44.510807 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:23:44.511878 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:23:44.511955 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:23:44.513373 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 03:23:44.515901 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 03:23:44.519975 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:23:44.520266 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 03:23:44.524458 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 03:23:44.528003 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 03:23:44.530044 jq[1439]: false
Apr 30 03:23:44.530478 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 03:23:44.533079 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 03:23:44.536082 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 03:23:44.539104 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 03:23:44.546133 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 03:23:44.547963 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 03:23:44.548532 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 03:23:44.550138 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 03:23:44.551051 extend-filesystems[1440]: Found loop3
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found loop4
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found loop5
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found sr0
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found vda
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found vda1
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found vda2
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found vda3
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found usr
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found vda4
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found vda6
Apr 30 03:23:44.552203 extend-filesystems[1440]: Found vda7
Apr 30 03:23:44.573571 extend-filesystems[1440]: Found vda9
Apr 30 03:23:44.573571 extend-filesystems[1440]: Checking size of /dev/vda9
Apr 30 03:23:44.552688 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 03:23:44.556208 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 03:23:44.576640 jq[1448]: true
Apr 30 03:23:44.559415 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 03:23:44.559658 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 03:23:44.570135 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 03:23:44.570655 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 03:23:44.584672 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 03:23:44.586025 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 03:23:44.592689 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1369)
Apr 30 03:23:44.597740 extend-filesystems[1440]: Resized partition /dev/vda9
Apr 30 03:23:44.600369 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 03:23:44.606375 update_engine[1447]: I20250430 03:23:44.602625  1447 main.cc:92] Flatcar Update Engine starting
Apr 30 03:23:44.606719 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024)
Apr 30 03:23:44.612391 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 30 03:23:44.612086 dbus-daemon[1438]: [system] SELinux support is enabled
Apr 30 03:23:44.612619 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 03:23:44.613747 jq[1464]: true
Apr 30 03:23:44.617903 update_engine[1447]: I20250430 03:23:44.617842  1447 update_check_scheduler.cc:74] Next update check in 7m52s
Apr 30 03:23:44.629715 tar[1452]: linux-amd64/LICENSE
Apr 30 03:23:44.630007 tar[1452]: linux-amd64/helm
Apr 30 03:23:44.637163 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 30 03:23:44.637072 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 03:23:44.638610 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 03:23:44.638639 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 03:23:44.640160 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 03:23:44.640176 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 03:23:44.650286 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 03:23:44.664224 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 03:23:44.664256 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 03:23:44.666463 systemd-logind[1446]: New seat seat0.
Apr 30 03:23:44.667161 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 03:23:44.667161 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 03:23:44.667161 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 30 03:23:44.674339 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 03:23:44.675810 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Apr 30 03:23:44.677160 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 03:23:44.677414 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 03:23:44.683565 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 03:23:44.691942 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 03:23:44.692857 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 03:23:44.695284 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 30 03:23:44.797827 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 03:23:44.807911 containerd[1466]: time="2025-04-30T03:23:44.807814488Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 03:23:44.823689 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 03:23:44.832437 containerd[1466]: time="2025-04-30T03:23:44.832375077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:23:44.834429 containerd[1466]: time="2025-04-30T03:23:44.834379947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:23:44.834429 containerd[1466]: time="2025-04-30T03:23:44.834417889Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 03:23:44.834515 containerd[1466]: time="2025-04-30T03:23:44.834440311Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 03:23:44.834687 containerd[1466]: time="2025-04-30T03:23:44.834666285Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 03:23:44.834687 containerd[1466]: time="2025-04-30T03:23:44.834686863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 03:23:44.834804 containerd[1466]: time="2025-04-30T03:23:44.834757926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:23:44.834804 containerd[1466]: time="2025-04-30T03:23:44.834776141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:23:44.835030 containerd[1466]: time="2025-04-30T03:23:44.835006593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:23:44.835030 containerd[1466]: time="2025-04-30T03:23:44.835025859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 03:23:44.835092 containerd[1466]: time="2025-04-30T03:23:44.835039805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:23:44.835092 containerd[1466]: time="2025-04-30T03:23:44.835050635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 03:23:44.835207 containerd[1466]: time="2025-04-30T03:23:44.835147387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:23:44.836523 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 03:23:44.838329 containerd[1466]: time="2025-04-30T03:23:44.838293749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 03:23:44.839028 containerd[1466]: time="2025-04-30T03:23:44.838706833Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 03:23:44.839028 containerd[1466]: time="2025-04-30T03:23:44.838728274Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 03:23:44.839028 containerd[1466]: time="2025-04-30T03:23:44.838843089Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 03:23:44.839028 containerd[1466]: time="2025-04-30T03:23:44.838901499Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 03:23:44.844205 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 03:23:44.844552 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 03:23:44.865512 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 03:23:44.880577 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 03:23:44.896100 containerd[1466]: time="2025-04-30T03:23:44.896030997Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 03:23:44.896100 containerd[1466]: time="2025-04-30T03:23:44.896111178Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 03:23:44.896237 containerd[1466]: time="2025-04-30T03:23:44.896133880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 03:23:44.896237 containerd[1466]: time="2025-04-30T03:23:44.896154258Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 03:23:44.896237 containerd[1466]: time="2025-04-30T03:23:44.896173414Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 03:23:44.896433 containerd[1466]: time="2025-04-30T03:23:44.896395781Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 03:23:44.897049 containerd[1466]: time="2025-04-30T03:23:44.896733645Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 03:23:44.897174 containerd[1466]: time="2025-04-30T03:23:44.897155506Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 03:23:44.897232 containerd[1466]: time="2025-04-30T03:23:44.897217423Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 03:23:44.897282 containerd[1466]: time="2025-04-30T03:23:44.897269510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 03:23:44.897334 containerd[1466]: time="2025-04-30T03:23:44.897322570Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 03:23:44.897356 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 03:23:44.898433 containerd[1466]: time="2025-04-30T03:23:44.898398939Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 03:23:44.898499 containerd[1466]: time="2025-04-30T03:23:44.898485501Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 03:23:44.898551 containerd[1466]: time="2025-04-30T03:23:44.898539593Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 03:23:44.898613 containerd[1466]: time="2025-04-30T03:23:44.898598553Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 03:23:44.898665 containerd[1466]: time="2025-04-30T03:23:44.898653075Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898703240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898718658Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898741251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898753694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898765566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898778030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898789972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898803668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898815781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898829717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898842310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898857258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898869622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.899971 containerd[1466]: time="2025-04-30T03:23:44.898881113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.898894548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.898908845Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.898949231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.898961835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.898972435Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.899041354Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.899059838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.899071150Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.899082621Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.899092239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.899124470Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.899137715Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:23:44.900247 containerd[1466]: time="2025-04-30T03:23:44.899149457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:23:44.900495 containerd[1466]: time="2025-04-30T03:23:44.899410957Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:23:44.900495 containerd[1466]: time="2025-04-30T03:23:44.899485797Z" level=info msg="Connect containerd service" Apr 30 03:23:44.900495 containerd[1466]: time="2025-04-30T03:23:44.899529740Z" level=info msg="using legacy CRI server" Apr 30 03:23:44.900495 containerd[1466]: time="2025-04-30T03:23:44.899535941Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:23:44.900495 containerd[1466]: time="2025-04-30T03:23:44.899619408Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:23:44.901067 containerd[1466]: 
time="2025-04-30T03:23:44.901043589Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:23:44.901373 containerd[1466]: time="2025-04-30T03:23:44.901314167Z" level=info msg="Start subscribing containerd event" Apr 30 03:23:44.901428 containerd[1466]: time="2025-04-30T03:23:44.901395710Z" level=info msg="Start recovering state" Apr 30 03:23:44.901520 containerd[1466]: time="2025-04-30T03:23:44.901491359Z" level=info msg="Start event monitor" Apr 30 03:23:44.901562 containerd[1466]: time="2025-04-30T03:23:44.901536604Z" level=info msg="Start snapshots syncer" Apr 30 03:23:44.901562 containerd[1466]: time="2025-04-30T03:23:44.901552845Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:23:44.901797 containerd[1466]: time="2025-04-30T03:23:44.901748061Z" level=info msg="Start streaming server" Apr 30 03:23:44.902019 containerd[1466]: time="2025-04-30T03:23:44.901701764Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:23:44.902019 containerd[1466]: time="2025-04-30T03:23:44.901982821Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:23:44.902130 containerd[1466]: time="2025-04-30T03:23:44.902116893Z" level=info msg="containerd successfully booted in 0.095376s" Apr 30 03:23:44.917526 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:23:44.919007 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:23:44.920442 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:23:45.077248 tar[1452]: linux-amd64/README.md Apr 30 03:23:45.092178 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 30 03:23:45.399611 systemd-networkd[1380]: eth0: Gained IPv6LL Apr 30 03:23:45.403928 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:23:45.406065 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:23:45.420303 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 30 03:23:45.423613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:23:45.426618 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:23:45.455917 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:23:45.457927 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 30 03:23:45.458197 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 30 03:23:45.462102 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:23:46.158669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:23:46.160698 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:23:46.162046 systemd[1]: Startup finished in 1.404s (kernel) + 8.019s (initrd) + 4.450s (userspace) = 13.875s. 
Apr 30 03:23:46.164891 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:23:46.591793 kubelet[1551]: E0430 03:23:46.591709 1551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:23:46.595740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:23:46.596016 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:23:47.210464 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:23:47.211799 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:58274.service - OpenSSH per-connection server daemon (10.0.0.1:58274). Apr 30 03:23:47.260133 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 58274 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:23:47.262645 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:47.271488 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:23:47.283289 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:23:47.285654 systemd-logind[1446]: New session 1 of user core. Apr 30 03:23:47.296583 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:23:47.313222 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:23:47.316343 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:23:47.427527 systemd[1568]: Queued start job for default target default.target. 
Apr 30 03:23:47.437447 systemd[1568]: Created slice app.slice - User Application Slice. Apr 30 03:23:47.437482 systemd[1568]: Reached target paths.target - Paths. Apr 30 03:23:47.437502 systemd[1568]: Reached target timers.target - Timers. Apr 30 03:23:47.439327 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:23:47.454088 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:23:47.454236 systemd[1568]: Reached target sockets.target - Sockets. Apr 30 03:23:47.454252 systemd[1568]: Reached target basic.target - Basic System. Apr 30 03:23:47.454292 systemd[1568]: Reached target default.target - Main User Target. Apr 30 03:23:47.454330 systemd[1568]: Startup finished in 130ms. Apr 30 03:23:47.454785 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:23:47.456766 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:23:47.520305 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:58278.service - OpenSSH per-connection server daemon (10.0.0.1:58278). Apr 30 03:23:47.575097 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 58278 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:23:47.577163 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:47.581807 systemd-logind[1446]: New session 2 of user core. Apr 30 03:23:47.597106 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:23:47.655846 sshd[1579]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:47.678445 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:58278.service: Deactivated successfully. Apr 30 03:23:47.680453 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:23:47.682307 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:23:47.689236 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:58284.service - OpenSSH per-connection server daemon (10.0.0.1:58284). 
Apr 30 03:23:47.690407 systemd-logind[1446]: Removed session 2. Apr 30 03:23:47.722593 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 58284 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:23:47.724278 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:47.728991 systemd-logind[1446]: New session 3 of user core. Apr 30 03:23:47.737091 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:23:47.790601 sshd[1586]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:47.805892 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:58284.service: Deactivated successfully. Apr 30 03:23:47.808640 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:23:47.811030 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:23:47.824366 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:58290.service - OpenSSH per-connection server daemon (10.0.0.1:58290). Apr 30 03:23:47.825451 systemd-logind[1446]: Removed session 3. Apr 30 03:23:47.857050 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 58290 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:23:47.858901 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:47.863027 systemd-logind[1446]: New session 4 of user core. Apr 30 03:23:47.874131 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:23:47.932101 sshd[1593]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:47.943882 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:58290.service: Deactivated successfully. Apr 30 03:23:47.945830 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:23:47.947441 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:23:47.948973 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:58296.service - OpenSSH per-connection server daemon (10.0.0.1:58296). 
Apr 30 03:23:47.949926 systemd-logind[1446]: Removed session 4. Apr 30 03:23:47.986551 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 58296 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:23:47.988430 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:47.993048 systemd-logind[1446]: New session 5 of user core. Apr 30 03:23:48.013303 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:23:48.082241 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:23:48.082794 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:23:48.102321 sudo[1603]: pam_unix(sudo:session): session closed for user root Apr 30 03:23:48.105444 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:48.118287 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:58296.service: Deactivated successfully. Apr 30 03:23:48.120490 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:23:48.122367 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:23:48.134252 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:58302.service - OpenSSH per-connection server daemon (10.0.0.1:58302). Apr 30 03:23:48.135287 systemd-logind[1446]: Removed session 5. Apr 30 03:23:48.169114 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 58302 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:23:48.171294 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:48.179204 systemd-logind[1446]: New session 6 of user core. Apr 30 03:23:48.195137 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 30 03:23:48.256727 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:23:48.257251 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:23:48.261989 sudo[1612]: pam_unix(sudo:session): session closed for user root Apr 30 03:23:48.269536 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:23:48.269915 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:23:48.290294 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:23:48.292238 auditctl[1615]: No rules Apr 30 03:23:48.293762 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:23:48.294106 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:23:48.296620 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:23:48.344594 augenrules[1633]: No rules Apr 30 03:23:48.346608 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:23:48.348058 sudo[1611]: pam_unix(sudo:session): session closed for user root Apr 30 03:23:48.350033 sshd[1608]: pam_unix(sshd:session): session closed for user core Apr 30 03:23:48.366984 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:58302.service: Deactivated successfully. Apr 30 03:23:48.368783 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:23:48.370256 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:23:48.379267 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:58316.service - OpenSSH per-connection server daemon (10.0.0.1:58316). Apr 30 03:23:48.380261 systemd-logind[1446]: Removed session 6. 
Apr 30 03:23:48.411845 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 58316 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:23:48.414229 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:23:48.420351 systemd-logind[1446]: New session 7 of user core. Apr 30 03:23:48.438183 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:23:48.495256 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:23:48.495753 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:23:49.276290 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:23:49.276502 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:23:50.105428 dockerd[1662]: time="2025-04-30T03:23:50.105283238Z" level=info msg="Starting up" Apr 30 03:23:50.973155 dockerd[1662]: time="2025-04-30T03:23:50.973088573Z" level=info msg="Loading containers: start." Apr 30 03:23:51.100980 kernel: Initializing XFRM netlink socket Apr 30 03:23:51.197776 systemd-networkd[1380]: docker0: Link UP Apr 30 03:23:51.229822 dockerd[1662]: time="2025-04-30T03:23:51.229695804Z" level=info msg="Loading containers: done." Apr 30 03:23:51.252417 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck806122307-merged.mount: Deactivated successfully. 
Apr 30 03:23:51.255624 dockerd[1662]: time="2025-04-30T03:23:51.255581527Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:23:51.255722 dockerd[1662]: time="2025-04-30T03:23:51.255707599Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:23:51.255883 dockerd[1662]: time="2025-04-30T03:23:51.255857783Z" level=info msg="Daemon has completed initialization" Apr 30 03:23:51.296129 dockerd[1662]: time="2025-04-30T03:23:51.296047760Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:23:51.296295 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:23:52.366514 containerd[1466]: time="2025-04-30T03:23:52.366451810Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 03:23:53.064912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224918967.mount: Deactivated successfully. 
Apr 30 03:23:54.357491 containerd[1466]: time="2025-04-30T03:23:54.357414281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:54.358438 containerd[1466]: time="2025-04-30T03:23:54.358379642Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" Apr 30 03:23:54.359635 containerd[1466]: time="2025-04-30T03:23:54.359594745Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:54.362426 containerd[1466]: time="2025-04-30T03:23:54.362374626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:54.363585 containerd[1466]: time="2025-04-30T03:23:54.363543979Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.997051765s" Apr 30 03:23:54.363585 containerd[1466]: time="2025-04-30T03:23:54.363583964Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Apr 30 03:23:54.364682 containerd[1466]: time="2025-04-30T03:23:54.364594953Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 03:23:56.049019 containerd[1466]: time="2025-04-30T03:23:56.048905006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:56.049639 containerd[1466]: time="2025-04-30T03:23:56.049526009Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" Apr 30 03:23:56.050858 containerd[1466]: time="2025-04-30T03:23:56.050817974Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:56.053762 containerd[1466]: time="2025-04-30T03:23:56.053702060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:56.057747 containerd[1466]: time="2025-04-30T03:23:56.057700049Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.693058577s" Apr 30 03:23:56.057814 containerd[1466]: time="2025-04-30T03:23:56.057747353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Apr 30 03:23:56.058315 containerd[1466]: time="2025-04-30T03:23:56.058257928Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 03:23:56.747965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:23:56.756103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:23:56.990557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:23:56.997636 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:23:57.407761 kubelet[1875]: E0430 03:23:57.407684 1875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:23:57.415448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:23:57.415771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:23:58.369780 containerd[1466]: time="2025-04-30T03:23:58.369702587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:58.370535 containerd[1466]: time="2025-04-30T03:23:58.370486648Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" Apr 30 03:23:58.371773 containerd[1466]: time="2025-04-30T03:23:58.371740934Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:58.374567 containerd[1466]: time="2025-04-30T03:23:58.374542816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:23:58.375540 containerd[1466]: time="2025-04-30T03:23:58.375500110Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.317206142s" Apr 30 03:23:58.375596 containerd[1466]: time="2025-04-30T03:23:58.375543934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Apr 30 03:23:58.376203 containerd[1466]: time="2025-04-30T03:23:58.376160524Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 03:23:59.768489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717924431.mount: Deactivated successfully. Apr 30 03:24:00.680188 containerd[1466]: time="2025-04-30T03:24:00.680045580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:00.691826 containerd[1466]: time="2025-04-30T03:24:00.691788013Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" Apr 30 03:24:00.693557 containerd[1466]: time="2025-04-30T03:24:00.693514515Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:00.702885 containerd[1466]: time="2025-04-30T03:24:00.702802470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:00.703595 containerd[1466]: time="2025-04-30T03:24:00.703536013Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.327334542s" Apr 30 03:24:00.703595 containerd[1466]: time="2025-04-30T03:24:00.703581703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Apr 30 03:24:00.704343 containerd[1466]: time="2025-04-30T03:24:00.704311120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 03:24:01.631789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491979130.mount: Deactivated successfully. Apr 30 03:24:03.747553 containerd[1466]: time="2025-04-30T03:24:03.747471815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:03.748566 containerd[1466]: time="2025-04-30T03:24:03.748522545Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Apr 30 03:24:03.750058 containerd[1466]: time="2025-04-30T03:24:03.749990799Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:03.754032 containerd[1466]: time="2025-04-30T03:24:03.753961813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:03.757084 containerd[1466]: time="2025-04-30T03:24:03.756892272Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.052534738s" Apr 30 03:24:03.757084 containerd[1466]: time="2025-04-30T03:24:03.756990356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Apr 30 03:24:03.758414 containerd[1466]: time="2025-04-30T03:24:03.758333564Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 03:24:04.311803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223603408.mount: Deactivated successfully. Apr 30 03:24:04.319000 containerd[1466]: time="2025-04-30T03:24:04.318943705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:04.319692 containerd[1466]: time="2025-04-30T03:24:04.319634341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 30 03:24:04.320861 containerd[1466]: time="2025-04-30T03:24:04.320823698Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:04.323394 containerd[1466]: time="2025-04-30T03:24:04.323341469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:04.324428 containerd[1466]: time="2025-04-30T03:24:04.324380111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 565.997006ms" Apr 30 
03:24:04.324428 containerd[1466]: time="2025-04-30T03:24:04.324412711Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 03:24:04.325044 containerd[1466]: time="2025-04-30T03:24:04.325011493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 03:24:04.998410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596034834.mount: Deactivated successfully. Apr 30 03:24:07.498092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:24:07.527264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:24:07.760793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:24:07.765841 (kubelet)[2012]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:24:08.051270 kubelet[2012]: E0430 03:24:08.028110 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:24:08.033194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:24:08.033490 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:24:08.452011 containerd[1466]: time="2025-04-30T03:24:08.451771163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:08.452729 containerd[1466]: time="2025-04-30T03:24:08.452651482Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Apr 30 03:24:08.454164 containerd[1466]: time="2025-04-30T03:24:08.454119337Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:08.457826 containerd[1466]: time="2025-04-30T03:24:08.457782968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:08.459640 containerd[1466]: time="2025-04-30T03:24:08.459581233Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.134532454s" Apr 30 03:24:08.459714 containerd[1466]: time="2025-04-30T03:24:08.459638744Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Apr 30 03:24:10.762896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:24:10.773179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:24:10.805294 systemd[1]: Reloading requested from client PID 2052 ('systemctl') (unit session-7.scope)... Apr 30 03:24:10.805317 systemd[1]: Reloading... 
Apr 30 03:24:10.905948 zram_generator::config[2091]: No configuration found. Apr 30 03:24:11.247770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:24:11.342943 systemd[1]: Reloading finished in 537 ms. Apr 30 03:24:11.399507 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:24:11.399607 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:24:11.399885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:24:11.402462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:24:11.585108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:24:11.591451 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:24:11.638646 kubelet[2140]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:24:11.638646 kubelet[2140]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 03:24:11.638646 kubelet[2140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 03:24:11.639203 kubelet[2140]: I0430 03:24:11.638734 2140 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:24:12.377612 kubelet[2140]: I0430 03:24:12.377540 2140 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 03:24:12.377612 kubelet[2140]: I0430 03:24:12.377583 2140 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:24:12.377902 kubelet[2140]: I0430 03:24:12.377870 2140 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 03:24:12.407044 kubelet[2140]: E0430 03:24:12.406974 2140 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:12.409715 kubelet[2140]: I0430 03:24:12.409653 2140 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:24:12.420817 kubelet[2140]: E0430 03:24:12.420771 2140 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 03:24:12.420817 kubelet[2140]: I0430 03:24:12.420804 2140 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 03:24:12.426460 kubelet[2140]: I0430 03:24:12.426417 2140 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:24:12.426708 kubelet[2140]: I0430 03:24:12.426650 2140 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:24:12.427697 kubelet[2140]: I0430 03:24:12.426691 2140 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 03:24:12.427697 kubelet[2140]: I0430 03:24:12.427209 2140 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 03:24:12.427697 kubelet[2140]: I0430 03:24:12.427220 2140 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 03:24:12.427697 kubelet[2140]: I0430 03:24:12.427390 2140 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:24:12.432530 kubelet[2140]: I0430 03:24:12.432466 2140 kubelet.go:446] "Attempting to sync node with API server" Apr 30 03:24:12.432530 kubelet[2140]: I0430 03:24:12.432511 2140 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:24:12.432530 kubelet[2140]: I0430 03:24:12.432537 2140 kubelet.go:352] "Adding apiserver pod source" Apr 30 03:24:12.432717 kubelet[2140]: I0430 03:24:12.432549 2140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:24:12.438850 kubelet[2140]: W0430 03:24:12.438716 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Apr 30 03:24:12.438850 kubelet[2140]: E0430 03:24:12.438794 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:12.439773 kubelet[2140]: I0430 03:24:12.439736 2140 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:24:12.440204 kubelet[2140]: I0430 03:24:12.440175 2140 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:24:12.441027 kubelet[2140]: W0430 03:24:12.440911 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Apr 30 03:24:12.441027 kubelet[2140]: E0430 03:24:12.441023 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:12.455055 kubelet[2140]: W0430 03:24:12.454991 2140 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:24:12.457555 kubelet[2140]: I0430 03:24:12.457532 2140 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 03:24:12.457597 kubelet[2140]: I0430 03:24:12.457574 2140 server.go:1287] "Started kubelet" Apr 30 03:24:12.459131 kubelet[2140]: I0430 03:24:12.459063 2140 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:24:12.459785 kubelet[2140]: I0430 03:24:12.459394 2140 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:24:12.459785 kubelet[2140]: I0430 03:24:12.459003 2140 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:24:12.459984 kubelet[2140]: I0430 03:24:12.459907 2140 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:24:12.460288 kubelet[2140]: I0430 03:24:12.460255 2140 server.go:490] "Adding debug handlers to kubelet server" Apr 30 03:24:12.461230 kubelet[2140]: I0430 03:24:12.461108 2140 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 03:24:12.461974 kubelet[2140]: E0430 03:24:12.461927 2140 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:24:12.462028 kubelet[2140]: I0430 03:24:12.461980 2140 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 03:24:12.463552 kubelet[2140]: I0430 03:24:12.462140 2140 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:24:12.463552 kubelet[2140]: I0430 03:24:12.462198 2140 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:24:12.463552 kubelet[2140]: W0430 03:24:12.462768 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Apr 30 03:24:12.463552 kubelet[2140]: E0430 03:24:12.462952 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:12.463552 kubelet[2140]: I0430 03:24:12.463175 2140 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:24:12.463552 kubelet[2140]: I0430 03:24:12.463254 2140 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:24:12.463552 kubelet[2140]: E0430 03:24:12.463457 2140 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" Apr 30 03:24:12.463783 kubelet[2140]: E0430 03:24:12.463667 2140 kubelet.go:1561] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:24:12.464378 kubelet[2140]: I0430 03:24:12.464340 2140 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:24:12.464812 kubelet[2140]: E0430 03:24:12.463349 2140 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183afab30b71365b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 03:24:12.457547355 +0000 UTC m=+0.861396715,LastTimestamp:2025-04-30 03:24:12.457547355 +0000 UTC m=+0.861396715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 03:24:12.478052 kubelet[2140]: I0430 03:24:12.477982 2140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:24:12.481030 kubelet[2140]: I0430 03:24:12.480932 2140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:24:12.481030 kubelet[2140]: I0430 03:24:12.480965 2140 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 03:24:12.481030 kubelet[2140]: I0430 03:24:12.480988 2140 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 03:24:12.481030 kubelet[2140]: I0430 03:24:12.480999 2140 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 03:24:12.481156 kubelet[2140]: E0430 03:24:12.481072 2140 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:24:12.485320 kubelet[2140]: W0430 03:24:12.484825 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Apr 30 03:24:12.485320 kubelet[2140]: E0430 03:24:12.484892 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Apr 30 03:24:12.486192 kubelet[2140]: I0430 03:24:12.486163 2140 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 03:24:12.486192 kubelet[2140]: I0430 03:24:12.486188 2140 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 03:24:12.486273 kubelet[2140]: I0430 03:24:12.486210 2140 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:24:12.562731 kubelet[2140]: E0430 03:24:12.562666 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:24:12.582187 kubelet[2140]: E0430 03:24:12.582103 2140 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 03:24:12.663578 kubelet[2140]: E0430 03:24:12.663388 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:24:12.665067 kubelet[2140]: E0430 03:24:12.664996 2140 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Apr 30 03:24:12.763680 kubelet[2140]: E0430 03:24:12.763586 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:24:12.782962 kubelet[2140]: E0430 03:24:12.782850 2140 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 03:24:12.814147 kubelet[2140]: I0430 03:24:12.814077 2140 policy_none.go:49] "None policy: Start" Apr 30 03:24:12.814147 kubelet[2140]: I0430 03:24:12.814136 2140 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 03:24:12.814147 kubelet[2140]: I0430 03:24:12.814152 2140 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:24:12.864299 kubelet[2140]: E0430 03:24:12.864220 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:24:12.965445 kubelet[2140]: E0430 03:24:12.965257 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:24:13.066002 kubelet[2140]: E0430 03:24:13.065942 2140 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 03:24:13.066446 kubelet[2140]: E0430 03:24:13.066398 2140 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" Apr 30 03:24:13.078733 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 30 03:24:13.094861 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:24:13.098632 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 03:24:13.108169 kubelet[2140]: I0430 03:24:13.108038 2140 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:24:13.109149 kubelet[2140]: I0430 03:24:13.108283 2140 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 03:24:13.109149 kubelet[2140]: I0430 03:24:13.108304 2140 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:24:13.109149 kubelet[2140]: I0430 03:24:13.108564 2140 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:24:13.109406 kubelet[2140]: E0430 03:24:13.109224 2140 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 03:24:13.109406 kubelet[2140]: E0430 03:24:13.109259 2140 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 03:24:13.193891 systemd[1]: Created slice kubepods-burstable-podfc05922f49821de855e90cec6c18996b.slice - libcontainer container kubepods-burstable-podfc05922f49821de855e90cec6c18996b.slice. 
Apr 30 03:24:13.211213 kubelet[2140]: I0430 03:24:13.210644 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 03:24:13.211358 kubelet[2140]: E0430 03:24:13.211235 2140 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Apr 30 03:24:13.217569 kubelet[2140]: E0430 03:24:13.217056 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 03:24:13.219749 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. Apr 30 03:24:13.222763 kubelet[2140]: E0430 03:24:13.222641 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 03:24:13.225858 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
Apr 30 03:24:13.227908 kubelet[2140]: E0430 03:24:13.227690 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 03:24:13.265219 kubelet[2140]: I0430 03:24:13.265151 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:24:13.265219 kubelet[2140]: I0430 03:24:13.265201 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" Apr 30 03:24:13.265219 kubelet[2140]: I0430 03:24:13.265220 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc05922f49821de855e90cec6c18996b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc05922f49821de855e90cec6c18996b\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:24:13.265219 kubelet[2140]: I0430 03:24:13.265236 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc05922f49821de855e90cec6c18996b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc05922f49821de855e90cec6c18996b\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:24:13.265528 kubelet[2140]: I0430 03:24:13.265250 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:24:13.265528 kubelet[2140]: I0430 03:24:13.265270 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:24:13.265528 kubelet[2140]: I0430 03:24:13.265287 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:24:13.265528 kubelet[2140]: I0430 03:24:13.265304 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 03:24:13.265528 kubelet[2140]: I0430 03:24:13.265319 2140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc05922f49821de855e90cec6c18996b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fc05922f49821de855e90cec6c18996b\") " pod="kube-system/kube-apiserver-localhost" Apr 30 03:24:13.413312 kubelet[2140]: I0430 03:24:13.413270 2140 kubelet_node_status.go:76] "Attempting to register node" 
node="localhost"
Apr 30 03:24:13.413737 kubelet[2140]: E0430 03:24:13.413695 2140 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Apr 30 03:24:13.518078 kubelet[2140]: E0430 03:24:13.518024 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:13.518976 containerd[1466]: time="2025-04-30T03:24:13.518902586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fc05922f49821de855e90cec6c18996b,Namespace:kube-system,Attempt:0,}"
Apr 30 03:24:13.523128 kubelet[2140]: E0430 03:24:13.523074 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:13.523602 containerd[1466]: time="2025-04-30T03:24:13.523559940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
Apr 30 03:24:13.528963 kubelet[2140]: E0430 03:24:13.528909 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:13.529484 containerd[1466]: time="2025-04-30T03:24:13.529448802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
Apr 30 03:24:13.627863 kubelet[2140]: W0430 03:24:13.627798 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Apr 30 03:24:13.627863 kubelet[2140]: E0430 03:24:13.627862 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:24:13.815710 kubelet[2140]: I0430 03:24:13.815565 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Apr 30 03:24:13.816170 kubelet[2140]: E0430 03:24:13.815960 2140 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Apr 30 03:24:13.867931 kubelet[2140]: E0430 03:24:13.867848 2140 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s"
Apr 30 03:24:13.872334 kubelet[2140]: W0430 03:24:13.872264 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Apr 30 03:24:13.872394 kubelet[2140]: E0430 03:24:13.872372 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:24:13.956962 kubelet[2140]: W0430 03:24:13.956860 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Apr 30 03:24:13.956962 kubelet[2140]: E0430 03:24:13.956966 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:24:14.014472 kubelet[2140]: W0430 03:24:14.014375 2140 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
Apr 30 03:24:14.014472 kubelet[2140]: E0430 03:24:14.014472 2140 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:24:14.092987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693797279.mount: Deactivated successfully.
Apr 30 03:24:14.101372 containerd[1466]: time="2025-04-30T03:24:14.101293512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:24:14.103975 containerd[1466]: time="2025-04-30T03:24:14.103883371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:24:14.104885 containerd[1466]: time="2025-04-30T03:24:14.104828051Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:24:14.106588 containerd[1466]: time="2025-04-30T03:24:14.106550495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:24:14.107436 containerd[1466]: time="2025-04-30T03:24:14.107399737Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:24:14.107736 containerd[1466]: time="2025-04-30T03:24:14.107689049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:24:14.108781 containerd[1466]: time="2025-04-30T03:24:14.108737350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 30 03:24:14.109647 containerd[1466]: time="2025-04-30T03:24:14.109613732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:24:14.110624 containerd[1466]: time="2025-04-30T03:24:14.110589783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.957984ms"
Apr 30 03:24:14.114940 containerd[1466]: time="2025-04-30T03:24:14.114873502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.80586ms"
Apr 30 03:24:14.115682 containerd[1466]: time="2025-04-30T03:24:14.115635720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.111399ms"
Apr 30 03:24:14.441699 kubelet[2140]: E0430 03:24:14.441219 2140 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError"
Apr 30 03:24:14.500255 containerd[1466]: time="2025-04-30T03:24:14.499776526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:24:14.500255 containerd[1466]: time="2025-04-30T03:24:14.499867962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:24:14.500255 containerd[1466]: time="2025-04-30T03:24:14.499883537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:24:14.500255 containerd[1466]: time="2025-04-30T03:24:14.500062799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:24:14.500255 containerd[1466]: time="2025-04-30T03:24:14.499752105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:24:14.500255 containerd[1466]: time="2025-04-30T03:24:14.499862636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:24:14.500255 containerd[1466]: time="2025-04-30T03:24:14.499884119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:24:14.500255 containerd[1466]: time="2025-04-30T03:24:14.500092184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:24:14.505163 containerd[1466]: time="2025-04-30T03:24:14.503767062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:24:14.505163 containerd[1466]: time="2025-04-30T03:24:14.504997192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:24:14.505163 containerd[1466]: time="2025-04-30T03:24:14.505012888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:24:14.505163 containerd[1466]: time="2025-04-30T03:24:14.505096321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:24:14.546196 systemd[1]: Started cri-containerd-3ec0ecb1a925f10f59a6f2c8d17418296b91db3935fbc2dabee78435e7e8f779.scope - libcontainer container 3ec0ecb1a925f10f59a6f2c8d17418296b91db3935fbc2dabee78435e7e8f779.
Apr 30 03:24:14.617931 kubelet[2140]: I0430 03:24:14.617847 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Apr 30 03:24:14.618326 kubelet[2140]: E0430 03:24:14.618285 2140 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
Apr 30 03:24:14.618441 systemd[1]: Started cri-containerd-65bccbfa629ffefbd3df6af2a30ca5273dd97c75371977e66515485f3f4c9671.scope - libcontainer container 65bccbfa629ffefbd3df6af2a30ca5273dd97c75371977e66515485f3f4c9671.
Apr 30 03:24:14.620881 systemd[1]: Started cri-containerd-6bf619bde24fbfc459e866f6ae8acba49422212501b2cbb4c9a7118bf94e36a3.scope - libcontainer container 6bf619bde24fbfc459e866f6ae8acba49422212501b2cbb4c9a7118bf94e36a3.
Apr 30 03:24:14.672296 containerd[1466]: time="2025-04-30T03:24:14.672131569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ec0ecb1a925f10f59a6f2c8d17418296b91db3935fbc2dabee78435e7e8f779\""
Apr 30 03:24:14.674451 kubelet[2140]: E0430 03:24:14.674391 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:14.679121 containerd[1466]: time="2025-04-30T03:24:14.678902849Z" level=info msg="CreateContainer within sandbox \"3ec0ecb1a925f10f59a6f2c8d17418296b91db3935fbc2dabee78435e7e8f779\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 30 03:24:14.679501 containerd[1466]: time="2025-04-30T03:24:14.679041262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fc05922f49821de855e90cec6c18996b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bf619bde24fbfc459e866f6ae8acba49422212501b2cbb4c9a7118bf94e36a3\""
Apr 30 03:24:14.682450 kubelet[2140]: E0430 03:24:14.682428 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:14.685623 containerd[1466]: time="2025-04-30T03:24:14.685503092Z" level=info msg="CreateContainer within sandbox \"6bf619bde24fbfc459e866f6ae8acba49422212501b2cbb4c9a7118bf94e36a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 30 03:24:14.698075 containerd[1466]: time="2025-04-30T03:24:14.697992798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"65bccbfa629ffefbd3df6af2a30ca5273dd97c75371977e66515485f3f4c9671\""
Apr 30 03:24:14.698822 kubelet[2140]: E0430 03:24:14.698696 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:14.700113 containerd[1466]: time="2025-04-30T03:24:14.700081046Z" level=info msg="CreateContainer within sandbox \"65bccbfa629ffefbd3df6af2a30ca5273dd97c75371977e66515485f3f4c9671\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 30 03:24:14.707313 containerd[1466]: time="2025-04-30T03:24:14.707282499Z" level=info msg="CreateContainer within sandbox \"3ec0ecb1a925f10f59a6f2c8d17418296b91db3935fbc2dabee78435e7e8f779\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4eb5be30f017f2c554341883b9530e9d8a4119be0aa3a2b5d3158ec88a2ead9a\""
Apr 30 03:24:14.707744 containerd[1466]: time="2025-04-30T03:24:14.707723101Z" level=info msg="StartContainer for \"4eb5be30f017f2c554341883b9530e9d8a4119be0aa3a2b5d3158ec88a2ead9a\""
Apr 30 03:24:14.720855 containerd[1466]: time="2025-04-30T03:24:14.720798421Z" level=info msg="CreateContainer within sandbox \"6bf619bde24fbfc459e866f6ae8acba49422212501b2cbb4c9a7118bf94e36a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6896afaea6e1cdb93a118ca6ba89706d29ebaf45be7fedc9000bf00460b3495\""
Apr 30 03:24:14.721484 containerd[1466]: time="2025-04-30T03:24:14.721422127Z" level=info msg="StartContainer for \"f6896afaea6e1cdb93a118ca6ba89706d29ebaf45be7fedc9000bf00460b3495\""
Apr 30 03:24:14.722840 containerd[1466]: time="2025-04-30T03:24:14.722803088Z" level=info msg="CreateContainer within sandbox \"65bccbfa629ffefbd3df6af2a30ca5273dd97c75371977e66515485f3f4c9671\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61f5433db850af5b9d30e016d5c85f9bddef388d7f7401b70e936e1b671cf589\""
Apr 30 03:24:14.724935 containerd[1466]: time="2025-04-30T03:24:14.723319791Z" level=info msg="StartContainer for \"61f5433db850af5b9d30e016d5c85f9bddef388d7f7401b70e936e1b671cf589\""
Apr 30 03:24:14.742157 systemd[1]: Started cri-containerd-4eb5be30f017f2c554341883b9530e9d8a4119be0aa3a2b5d3158ec88a2ead9a.scope - libcontainer container 4eb5be30f017f2c554341883b9530e9d8a4119be0aa3a2b5d3158ec88a2ead9a.
Apr 30 03:24:14.765052 systemd[1]: Started cri-containerd-61f5433db850af5b9d30e016d5c85f9bddef388d7f7401b70e936e1b671cf589.scope - libcontainer container 61f5433db850af5b9d30e016d5c85f9bddef388d7f7401b70e936e1b671cf589.
Apr 30 03:24:14.766526 systemd[1]: Started cri-containerd-f6896afaea6e1cdb93a118ca6ba89706d29ebaf45be7fedc9000bf00460b3495.scope - libcontainer container f6896afaea6e1cdb93a118ca6ba89706d29ebaf45be7fedc9000bf00460b3495.
Apr 30 03:24:14.826028 containerd[1466]: time="2025-04-30T03:24:14.825873402Z" level=info msg="StartContainer for \"4eb5be30f017f2c554341883b9530e9d8a4119be0aa3a2b5d3158ec88a2ead9a\" returns successfully"
Apr 30 03:24:14.826028 containerd[1466]: time="2025-04-30T03:24:14.825988106Z" level=info msg="StartContainer for \"f6896afaea6e1cdb93a118ca6ba89706d29ebaf45be7fedc9000bf00460b3495\" returns successfully"
Apr 30 03:24:14.835612 containerd[1466]: time="2025-04-30T03:24:14.835559718Z" level=info msg="StartContainer for \"61f5433db850af5b9d30e016d5c85f9bddef388d7f7401b70e936e1b671cf589\" returns successfully"
Apr 30 03:24:15.492634 kubelet[2140]: E0430 03:24:15.492527 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 30 03:24:15.492634 kubelet[2140]: E0430 03:24:15.492640 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:15.493859 kubelet[2140]: E0430 03:24:15.493838 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 30 03:24:15.493996 kubelet[2140]: E0430 03:24:15.493968 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:15.496154 kubelet[2140]: E0430 03:24:15.496104 2140 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 30 03:24:15.496437 kubelet[2140]: E0430 03:24:15.496393 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:16.038989 kubelet[2140]: E0430 03:24:16.038945 2140 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 30 03:24:16.220533 kubelet[2140]: I0430 03:24:16.220076 2140 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Apr 30 03:24:16.226353 kubelet[2140]: I0430 03:24:16.226306 2140 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Apr 30 03:24:16.264253 kubelet[2140]: I0430 03:24:16.264123 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 30 03:24:16.275946 kubelet[2140]: E0430 03:24:16.275886 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 30 03:24:16.275946 kubelet[2140]: I0430 03:24:16.275930 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 30 03:24:16.277695 kubelet[2140]: E0430 03:24:16.277656 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 30 03:24:16.277695 kubelet[2140]: I0430 03:24:16.277685 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 30 03:24:16.284147 kubelet[2140]: E0430 03:24:16.284073 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 30 03:24:16.435194 kubelet[2140]: I0430 03:24:16.435060 2140 apiserver.go:52] "Watching apiserver"
Apr 30 03:24:16.462488 kubelet[2140]: I0430 03:24:16.462465 2140 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 03:24:16.496591 kubelet[2140]: I0430 03:24:16.496568 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 30 03:24:16.497090 kubelet[2140]: I0430 03:24:16.496612 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 30 03:24:16.498304 kubelet[2140]: E0430 03:24:16.498278 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 30 03:24:16.498381 kubelet[2140]: E0430 03:24:16.498278 2140 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 30 03:24:16.498434 kubelet[2140]: E0430 03:24:16.498414 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:16.498434 kubelet[2140]: E0430 03:24:16.498425 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:17.498818 kubelet[2140]: I0430 03:24:17.498763 2140 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 30 03:24:17.505703 kubelet[2140]: E0430 03:24:17.505644 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:18.500860 kubelet[2140]: E0430 03:24:18.500815 2140 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:18.537992 systemd[1]: Reloading requested from client PID 2417 ('systemctl') (unit session-7.scope)...
Apr 30 03:24:18.538018 systemd[1]: Reloading...
Apr 30 03:24:18.623954 zram_generator::config[2456]: No configuration found.
Apr 30 03:24:18.748695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:24:18.845324 systemd[1]: Reloading finished in 306 ms.
Apr 30 03:24:18.905721 kubelet[2140]: I0430 03:24:18.905673 2140 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:24:18.905739 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:24:18.916522 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 03:24:18.916851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:24:18.916906 systemd[1]: kubelet.service: Consumed 1.452s CPU time, 123.4M memory peak, 0B memory swap peak.
Apr 30 03:24:18.927283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:24:19.111634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:24:19.123420 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:24:19.172556 kubelet[2501]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:24:19.172556 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:24:19.172556 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:24:19.173165 kubelet[2501]: I0430 03:24:19.172602 2501 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:24:19.180154 kubelet[2501]: I0430 03:24:19.180095 2501 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 03:24:19.180154 kubelet[2501]: I0430 03:24:19.180128 2501 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:24:19.180395 kubelet[2501]: I0430 03:24:19.180371 2501 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 03:24:19.181560 kubelet[2501]: I0430 03:24:19.181534 2501 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 03:24:19.183626 kubelet[2501]: I0430 03:24:19.183583 2501 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:24:19.186939 kubelet[2501]: E0430 03:24:19.186885 2501 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 03:24:19.186939 kubelet[2501]: I0430 03:24:19.186927 2501 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 03:24:19.193296 kubelet[2501]: I0430 03:24:19.193240 2501 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:24:19.193617 kubelet[2501]: I0430 03:24:19.193570 2501 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:24:19.193891 kubelet[2501]: I0430 03:24:19.193608 2501 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 03:24:19.193983 kubelet[2501]: I0430 03:24:19.193893 2501 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 03:24:19.193983 kubelet[2501]: I0430 03:24:19.193908 2501 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 03:24:19.193983 kubelet[2501]: I0430 03:24:19.193980 2501 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:24:19.194258 kubelet[2501]: I0430 03:24:19.194222 2501 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 03:24:19.194258 kubelet[2501]: I0430 03:24:19.194246 2501 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 03:24:19.194964 kubelet[2501]: I0430 03:24:19.194269 2501 kubelet.go:352] "Adding apiserver pod source"
Apr 30 03:24:19.194964 kubelet[2501]: I0430 03:24:19.194285 2501 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 03:24:19.195297 kubelet[2501]: I0430 03:24:19.195256 2501 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 03:24:19.195629 kubelet[2501]: I0430 03:24:19.195607 2501 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 03:24:19.196149 kubelet[2501]: I0430 03:24:19.196134 2501 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 03:24:19.196201 kubelet[2501]: I0430 03:24:19.196168 2501 server.go:1287] "Started kubelet"
Apr 30 03:24:19.198631 kubelet[2501]: I0430 03:24:19.198610 2501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 03:24:19.200721 kubelet[2501]: I0430 03:24:19.200667 2501 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 03:24:19.202074 kubelet[2501]: I0430 03:24:19.202050 2501 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 03:24:19.203474 kubelet[2501]: I0430 03:24:19.203429 2501 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 03:24:19.203725 kubelet[2501]: I0430 03:24:19.203696 2501 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 03:24:19.203970 kubelet[2501]: I0430 03:24:19.203950 2501 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 03:24:19.206636 kubelet[2501]: E0430 03:24:19.206595 2501 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 03:24:19.207797 kubelet[2501]: I0430 03:24:19.207133 2501 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 03:24:19.207797 kubelet[2501]: I0430 03:24:19.207211 2501 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 03:24:19.207797 kubelet[2501]: I0430 03:24:19.207322 2501 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 03:24:19.208222 kubelet[2501]: E0430 03:24:19.208174 2501 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 03:24:19.210181 kubelet[2501]: I0430 03:24:19.209778 2501 factory.go:221] Registration of the systemd container factory successfully
Apr 30 03:24:19.210181 kubelet[2501]: I0430 03:24:19.210076 2501 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 03:24:19.216447 kubelet[2501]: I0430 03:24:19.216143 2501 factory.go:221] Registration of the containerd container factory successfully
Apr 30 03:24:19.219618 kubelet[2501]: I0430 03:24:19.218960 2501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:24:19.220971 kubelet[2501]: I0430 03:24:19.220877 2501 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:24:19.220971 kubelet[2501]: I0430 03:24:19.220909 2501 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 03:24:19.220971 kubelet[2501]: I0430 03:24:19.220947 2501 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 03:24:19.220971 kubelet[2501]: I0430 03:24:19.220955 2501 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 03:24:19.221112 kubelet[2501]: E0430 03:24:19.221004 2501 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:24:19.252775 kubelet[2501]: I0430 03:24:19.252730 2501 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 03:24:19.252775 kubelet[2501]: I0430 03:24:19.252758 2501 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 03:24:19.252775 kubelet[2501]: I0430 03:24:19.252780 2501 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:24:19.253089 kubelet[2501]: I0430 03:24:19.253047 2501 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 03:24:19.253139 kubelet[2501]: I0430 03:24:19.253078 2501 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 03:24:19.253139 kubelet[2501]: I0430 03:24:19.253101 2501 policy_none.go:49] "None policy: Start"
Apr 30 03:24:19.253139 kubelet[2501]: I0430 03:24:19.253127 2501 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 03:24:19.253204 kubelet[2501]: I0430 03:24:19.253146 2501 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:24:19.253329 kubelet[2501]: I0430 03:24:19.253294 2501 state_mem.go:75] "Updated machine memory state"
Apr 30 03:24:19.258101 kubelet[2501]: I0430 03:24:19.258031 2501 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:24:19.258291 kubelet[2501]: I0430 03:24:19.258250 2501 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 03:24:19.258291 kubelet[2501]: I0430 03:24:19.258270 2501 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:24:19.258632 kubelet[2501]: I0430 03:24:19.258484 2501 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:24:19.260148 kubelet[2501]: E0430 03:24:19.260007 2501 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 03:24:19.322156 kubelet[2501]: I0430 03:24:19.322063 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 30 03:24:19.323113 kubelet[2501]: I0430 03:24:19.322716 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 30 03:24:19.323113 kubelet[2501]: I0430 03:24:19.322718 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 30 03:24:19.339350 kubelet[2501]: E0430 03:24:19.339274 2501 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 30 03:24:19.364352 kubelet[2501]: I0430 03:24:19.364152 2501 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Apr 30 03:24:19.373200 kubelet[2501]: I0430 03:24:19.373136 2501 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Apr 30 03:24:19.373385 kubelet[2501]: I0430 03:24:19.373256 2501 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Apr 30 03:24:19.508454 kubelet[2501]: I0430 03:24:19.508384 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 03:24:19.508454 kubelet[2501]: I0430 03:24:19.508447 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 03:24:19.508664 kubelet[2501]: I0430 03:24:19.508481 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 03:24:19.508664 kubelet[2501]: I0430 03:24:19.508579 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc05922f49821de855e90cec6c18996b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc05922f49821de855e90cec6c18996b\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 03:24:19.508716 kubelet[2501]: I0430 03:24:19.508665 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc05922f49821de855e90cec6c18996b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fc05922f49821de855e90cec6c18996b\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 03:24:19.508716 kubelet[2501]: I0430 03:24:19.508706 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 03:24:19.508764 kubelet[2501]: I0430 03:24:19.508735 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 03:24:19.508792 kubelet[2501]: I0430 03:24:19.508756 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
Apr 30 03:24:19.508792 kubelet[2501]: I0430 03:24:19.508782 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc05922f49821de855e90cec6c18996b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fc05922f49821de855e90cec6c18996b\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 03:24:19.548083 sudo[2537]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 30 03:24:19.548468 sudo[2537]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 30 03:24:19.640111 kubelet[2501]: E0430 03:24:19.639857 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:19.640111 kubelet[2501]: E0430 03:24:19.639868 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:24:19.641070 kubelet[2501]: E0430 03:24:19.640973 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is:
1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:20.033272 sudo[2537]: pam_unix(sudo:session): session closed for user root Apr 30 03:24:20.195209 kubelet[2501]: I0430 03:24:20.195146 2501 apiserver.go:52] "Watching apiserver" Apr 30 03:24:20.207933 kubelet[2501]: I0430 03:24:20.207871 2501 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:24:20.235982 kubelet[2501]: E0430 03:24:20.235901 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:20.238877 kubelet[2501]: I0430 03:24:20.236365 2501 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 30 03:24:20.238877 kubelet[2501]: E0430 03:24:20.236587 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:20.241290 kubelet[2501]: E0430 03:24:20.241208 2501 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 30 03:24:20.241485 kubelet[2501]: E0430 03:24:20.241452 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:20.244775 kubelet[2501]: I0430 03:24:20.244716 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.244700245 podStartE2EDuration="3.244700245s" podCreationTimestamp="2025-04-30 03:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:20.234218883 +0000 UTC m=+1.104043827" watchObservedRunningTime="2025-04-30 03:24:20.244700245 +0000 
UTC m=+1.114525179" Apr 30 03:24:20.253714 kubelet[2501]: I0430 03:24:20.253483 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.253468898 podStartE2EDuration="1.253468898s" podCreationTimestamp="2025-04-30 03:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:20.244657142 +0000 UTC m=+1.114482076" watchObservedRunningTime="2025-04-30 03:24:20.253468898 +0000 UTC m=+1.123293832" Apr 30 03:24:20.263671 kubelet[2501]: I0430 03:24:20.263595 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.26358422 podStartE2EDuration="1.26358422s" podCreationTimestamp="2025-04-30 03:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:20.254253569 +0000 UTC m=+1.124078503" watchObservedRunningTime="2025-04-30 03:24:20.26358422 +0000 UTC m=+1.133409144" Apr 30 03:24:21.237726 kubelet[2501]: E0430 03:24:21.237626 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:21.237726 kubelet[2501]: E0430 03:24:21.237654 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:21.237726 kubelet[2501]: E0430 03:24:21.237626 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:21.428688 sudo[1644]: pam_unix(sudo:session): session closed for user root Apr 30 03:24:21.431081 sshd[1641]: 
pam_unix(sshd:session): session closed for user core Apr 30 03:24:21.435285 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:58316.service: Deactivated successfully. Apr 30 03:24:21.437440 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:24:21.437672 systemd[1]: session-7.scope: Consumed 5.220s CPU time, 157.0M memory peak, 0B memory swap peak. Apr 30 03:24:21.438220 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:24:21.439223 systemd-logind[1446]: Removed session 7. Apr 30 03:24:22.665567 kubelet[2501]: E0430 03:24:22.665485 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:22.712139 kubelet[2501]: E0430 03:24:22.711906 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:24.177267 kubelet[2501]: I0430 03:24:24.177208 2501 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:24:24.177879 containerd[1466]: time="2025-04-30T03:24:24.177705880Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 03:24:24.178255 kubelet[2501]: I0430 03:24:24.177947 2501 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:24:25.200986 systemd[1]: Created slice kubepods-besteffort-pod7544b32a_9b75_4adb_81c5_d4708d7fc932.slice - libcontainer container kubepods-besteffort-pod7544b32a_9b75_4adb_81c5_d4708d7fc932.slice. Apr 30 03:24:25.245474 systemd[1]: Created slice kubepods-burstable-pod354a3d62_5ab7_4b54_8289_98acde9e6e04.slice - libcontainer container kubepods-burstable-pod354a3d62_5ab7_4b54_8289_98acde9e6e04.slice. 
Apr 30 03:24:25.248574 kubelet[2501]: I0430 03:24:25.248514 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/354a3d62-5ab7-4b54-8289-98acde9e6e04-hubble-tls\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.248574 kubelet[2501]: I0430 03:24:25.248555 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7544b32a-9b75-4adb-81c5-d4708d7fc932-kube-proxy\") pod \"kube-proxy-vdsl9\" (UID: \"7544b32a-9b75-4adb-81c5-d4708d7fc932\") " pod="kube-system/kube-proxy-vdsl9" Apr 30 03:24:25.248574 kubelet[2501]: I0430 03:24:25.248579 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-etc-cni-netd\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249210 kubelet[2501]: I0430 03:24:25.248593 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7544b32a-9b75-4adb-81c5-d4708d7fc932-xtables-lock\") pod \"kube-proxy-vdsl9\" (UID: \"7544b32a-9b75-4adb-81c5-d4708d7fc932\") " pod="kube-system/kube-proxy-vdsl9" Apr 30 03:24:25.249210 kubelet[2501]: I0430 03:24:25.248608 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-run\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249210 kubelet[2501]: I0430 03:24:25.248648 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/354a3d62-5ab7-4b54-8289-98acde9e6e04-clustermesh-secrets\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249210 kubelet[2501]: I0430 03:24:25.248666 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7544b32a-9b75-4adb-81c5-d4708d7fc932-lib-modules\") pod \"kube-proxy-vdsl9\" (UID: \"7544b32a-9b75-4adb-81c5-d4708d7fc932\") " pod="kube-system/kube-proxy-vdsl9" Apr 30 03:24:25.249210 kubelet[2501]: I0430 03:24:25.248680 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-lib-modules\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249380 kubelet[2501]: I0430 03:24:25.248713 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtvq4\" (UniqueName: \"kubernetes.io/projected/7544b32a-9b75-4adb-81c5-d4708d7fc932-kube-api-access-xtvq4\") pod \"kube-proxy-vdsl9\" (UID: \"7544b32a-9b75-4adb-81c5-d4708d7fc932\") " pod="kube-system/kube-proxy-vdsl9" Apr 30 03:24:25.249380 kubelet[2501]: I0430 03:24:25.248732 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-cgroup\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249380 kubelet[2501]: I0430 03:24:25.248820 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-hostproc\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249380 kubelet[2501]: I0430 03:24:25.248878 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cni-path\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249380 kubelet[2501]: I0430 03:24:25.248909 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-config-path\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249520 kubelet[2501]: I0430 03:24:25.248982 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88cwf\" (UniqueName: \"kubernetes.io/projected/354a3d62-5ab7-4b54-8289-98acde9e6e04-kube-api-access-88cwf\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249520 kubelet[2501]: I0430 03:24:25.249019 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-bpf-maps\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249520 kubelet[2501]: I0430 03:24:25.249051 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-xtables-lock\") pod \"cilium-ghpvk\" (UID: 
\"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249520 kubelet[2501]: I0430 03:24:25.249083 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-host-proc-sys-net\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.249520 kubelet[2501]: I0430 03:24:25.249108 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-host-proc-sys-kernel\") pod \"cilium-ghpvk\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") " pod="kube-system/cilium-ghpvk" Apr 30 03:24:25.410521 systemd[1]: Created slice kubepods-besteffort-podbccd8a08_42f5_478c_a2d0_833e44ac5978.slice - libcontainer container kubepods-besteffort-podbccd8a08_42f5_478c_a2d0_833e44ac5978.slice. 
Apr 30 03:24:25.450309 kubelet[2501]: I0430 03:24:25.450272 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bccd8a08-42f5-478c-a2d0-833e44ac5978-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mkvkd\" (UID: \"bccd8a08-42f5-478c-a2d0-833e44ac5978\") " pod="kube-system/cilium-operator-6c4d7847fc-mkvkd" Apr 30 03:24:25.450309 kubelet[2501]: I0430 03:24:25.450312 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2bfj\" (UniqueName: \"kubernetes.io/projected/bccd8a08-42f5-478c-a2d0-833e44ac5978-kube-api-access-h2bfj\") pod \"cilium-operator-6c4d7847fc-mkvkd\" (UID: \"bccd8a08-42f5-478c-a2d0-833e44ac5978\") " pod="kube-system/cilium-operator-6c4d7847fc-mkvkd" Apr 30 03:24:25.510605 kubelet[2501]: E0430 03:24:25.510566 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:25.511186 containerd[1466]: time="2025-04-30T03:24:25.511147086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdsl9,Uid:7544b32a-9b75-4adb-81c5-d4708d7fc932,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:25.549042 kubelet[2501]: E0430 03:24:25.548611 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:25.549345 containerd[1466]: time="2025-04-30T03:24:25.549265976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ghpvk,Uid:354a3d62-5ab7-4b54-8289-98acde9e6e04,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:25.555004 containerd[1466]: time="2025-04-30T03:24:25.554509057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:25.555004 containerd[1466]: time="2025-04-30T03:24:25.554624037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:25.555004 containerd[1466]: time="2025-04-30T03:24:25.554682219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:25.555004 containerd[1466]: time="2025-04-30T03:24:25.554885370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:25.586069 systemd[1]: Started cri-containerd-a979b4e4f82011c49bc460a357b4b1889b63218a2cb9ee0274f1bc8f026edc80.scope - libcontainer container a979b4e4f82011c49bc460a357b4b1889b63218a2cb9ee0274f1bc8f026edc80. Apr 30 03:24:25.625580 containerd[1466]: time="2025-04-30T03:24:25.625342548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdsl9,Uid:7544b32a-9b75-4adb-81c5-d4708d7fc932,Namespace:kube-system,Attempt:0,} returns sandbox id \"a979b4e4f82011c49bc460a357b4b1889b63218a2cb9ee0274f1bc8f026edc80\"" Apr 30 03:24:25.626901 kubelet[2501]: E0430 03:24:25.626605 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:25.629031 containerd[1466]: time="2025-04-30T03:24:25.628046168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:25.630261 containerd[1466]: time="2025-04-30T03:24:25.629019499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:25.630261 containerd[1466]: time="2025-04-30T03:24:25.629054701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:25.630261 containerd[1466]: time="2025-04-30T03:24:25.629172629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:25.630261 containerd[1466]: time="2025-04-30T03:24:25.629947071Z" level=info msg="CreateContainer within sandbox \"a979b4e4f82011c49bc460a357b4b1889b63218a2cb9ee0274f1bc8f026edc80\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:24:25.653183 systemd[1]: Started cri-containerd-46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7.scope - libcontainer container 46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7. Apr 30 03:24:25.654932 containerd[1466]: time="2025-04-30T03:24:25.654871380Z" level=info msg="CreateContainer within sandbox \"a979b4e4f82011c49bc460a357b4b1889b63218a2cb9ee0274f1bc8f026edc80\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6861f0a1aec2188201589ade52b84a3c1fae6c4cf611e3d6a1c9b6a5a98921dc\"" Apr 30 03:24:25.655414 containerd[1466]: time="2025-04-30T03:24:25.655383577Z" level=info msg="StartContainer for \"6861f0a1aec2188201589ade52b84a3c1fae6c4cf611e3d6a1c9b6a5a98921dc\"" Apr 30 03:24:25.683615 containerd[1466]: time="2025-04-30T03:24:25.683516154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ghpvk,Uid:354a3d62-5ab7-4b54-8289-98acde9e6e04,Namespace:kube-system,Attempt:0,} returns sandbox id \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\"" Apr 30 03:24:25.684479 kubelet[2501]: E0430 03:24:25.684302 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:25.687464 containerd[1466]: time="2025-04-30T03:24:25.687331275Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 03:24:25.697188 systemd[1]: Started cri-containerd-6861f0a1aec2188201589ade52b84a3c1fae6c4cf611e3d6a1c9b6a5a98921dc.scope - libcontainer container 6861f0a1aec2188201589ade52b84a3c1fae6c4cf611e3d6a1c9b6a5a98921dc. Apr 30 03:24:25.713590 kubelet[2501]: E0430 03:24:25.713505 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:25.714858 containerd[1466]: time="2025-04-30T03:24:25.714792548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mkvkd,Uid:bccd8a08-42f5-478c-a2d0-833e44ac5978,Namespace:kube-system,Attempt:0,}" Apr 30 03:24:25.737349 containerd[1466]: time="2025-04-30T03:24:25.737271272Z" level=info msg="StartContainer for \"6861f0a1aec2188201589ade52b84a3c1fae6c4cf611e3d6a1c9b6a5a98921dc\" returns successfully" Apr 30 03:24:25.758251 containerd[1466]: time="2025-04-30T03:24:25.758152304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:24:25.758996 containerd[1466]: time="2025-04-30T03:24:25.758755328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:24:25.758996 containerd[1466]: time="2025-04-30T03:24:25.758824930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:25.759059 containerd[1466]: time="2025-04-30T03:24:25.759016210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:24:25.778093 systemd[1]: Started cri-containerd-5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8.scope - libcontainer container 5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8. Apr 30 03:24:25.820035 containerd[1466]: time="2025-04-30T03:24:25.819905005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mkvkd,Uid:bccd8a08-42f5-478c-a2d0-833e44ac5978,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8\"" Apr 30 03:24:25.821199 kubelet[2501]: E0430 03:24:25.821158 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:26.247322 kubelet[2501]: E0430 03:24:26.247295 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:26.257219 kubelet[2501]: I0430 03:24:26.257126 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vdsl9" podStartSLOduration=2.257104496 podStartE2EDuration="2.257104496s" podCreationTimestamp="2025-04-30 03:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:24:26.257002192 +0000 UTC m=+7.126827137" watchObservedRunningTime="2025-04-30 03:24:26.257104496 +0000 UTC m=+7.126929440" Apr 30 03:24:30.291829 update_engine[1447]: I20250430 03:24:30.291662 1447 update_attempter.cc:509] Updating boot flags... 
Apr 30 03:24:30.556967 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2880) Apr 30 03:24:30.605986 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2885) Apr 30 03:24:30.643391 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2885) Apr 30 03:24:30.839317 kubelet[2501]: E0430 03:24:30.839267 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:32.670876 kubelet[2501]: E0430 03:24:32.670525 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:32.717935 kubelet[2501]: E0430 03:24:32.717868 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:33.258936 kubelet[2501]: E0430 03:24:33.258882 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:24:47.259464 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:47774.service - OpenSSH per-connection server daemon (10.0.0.1:47774). Apr 30 03:24:47.304175 sshd[2890]: Accepted publickey for core from 10.0.0.1 port 47774 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:24:47.306260 sshd[2890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:47.311240 systemd-logind[1446]: New session 8 of user core. Apr 30 03:24:47.320258 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 30 03:24:47.475746 sshd[2890]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:47.481093 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:47774.service: Deactivated successfully. Apr 30 03:24:47.483306 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:24:47.483968 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:24:47.485191 systemd-logind[1446]: Removed session 8. Apr 30 03:24:52.488606 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:47782.service - OpenSSH per-connection server daemon (10.0.0.1:47782). Apr 30 03:24:52.529434 sshd[2905]: Accepted publickey for core from 10.0.0.1 port 47782 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:24:52.531507 sshd[2905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:52.536657 systemd-logind[1446]: New session 9 of user core. Apr 30 03:24:52.548216 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:24:52.700794 sshd[2905]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:52.704452 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:47782.service: Deactivated successfully. Apr 30 03:24:52.708119 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:24:52.709371 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:24:52.710453 systemd-logind[1446]: Removed session 9. Apr 30 03:24:56.939070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581622341.mount: Deactivated successfully. Apr 30 03:24:57.713520 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:46262.service - OpenSSH per-connection server daemon (10.0.0.1:46262). 
Apr 30 03:24:57.784531 sshd[2927]: Accepted publickey for core from 10.0.0.1 port 46262 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:24:57.786724 sshd[2927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:24:57.791788 systemd-logind[1446]: New session 10 of user core. Apr 30 03:24:57.800147 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:24:57.988465 sshd[2927]: pam_unix(sshd:session): session closed for user core Apr 30 03:24:57.992662 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:24:57.993208 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:46262.service: Deactivated successfully. Apr 30 03:24:57.995500 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:24:57.998435 systemd-logind[1446]: Removed session 10. Apr 30 03:24:59.760435 containerd[1466]: time="2025-04-30T03:24:59.760312067Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:59.761465 containerd[1466]: time="2025-04-30T03:24:59.761365027Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 03:24:59.762938 containerd[1466]: time="2025-04-30T03:24:59.762858140Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:24:59.767968 containerd[1466]: time="2025-04-30T03:24:59.765235373Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 34.077846749s" Apr 30 03:24:59.767968 containerd[1466]: time="2025-04-30T03:24:59.765307141Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 03:24:59.769374 containerd[1466]: time="2025-04-30T03:24:59.769314731Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 03:24:59.770801 containerd[1466]: time="2025-04-30T03:24:59.770746797Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 03:24:59.792941 containerd[1466]: time="2025-04-30T03:24:59.792861224Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\"" Apr 30 03:24:59.793642 containerd[1466]: time="2025-04-30T03:24:59.793590404Z" level=info msg="StartContainer for \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\"" Apr 30 03:24:59.840327 systemd[1]: Started cri-containerd-e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977.scope - libcontainer container e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977. 
Apr 30 03:24:59.870700 containerd[1466]: time="2025-04-30T03:24:59.870648266Z" level=info msg="StartContainer for \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\" returns successfully" Apr 30 03:24:59.884570 systemd[1]: cri-containerd-e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977.scope: Deactivated successfully. Apr 30 03:25:00.310609 kubelet[2501]: E0430 03:25:00.310557 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:00.656636 containerd[1466]: time="2025-04-30T03:25:00.656463183Z" level=info msg="shim disconnected" id=e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977 namespace=k8s.io Apr 30 03:25:00.656636 containerd[1466]: time="2025-04-30T03:25:00.656525712Z" level=warning msg="cleaning up after shim disconnected" id=e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977 namespace=k8s.io Apr 30 03:25:00.656636 containerd[1466]: time="2025-04-30T03:25:00.656537215Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:25:00.784257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977-rootfs.mount: Deactivated successfully. 
Apr 30 03:25:01.313368 kubelet[2501]: E0430 03:25:01.313323 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:01.314973 containerd[1466]: time="2025-04-30T03:25:01.314931802Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 03:25:01.472556 containerd[1466]: time="2025-04-30T03:25:01.472488518Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\"" Apr 30 03:25:01.473108 containerd[1466]: time="2025-04-30T03:25:01.473075759Z" level=info msg="StartContainer for \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\"" Apr 30 03:25:01.509138 systemd[1]: Started cri-containerd-f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d.scope - libcontainer container f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d. Apr 30 03:25:01.548135 containerd[1466]: time="2025-04-30T03:25:01.548074472Z" level=info msg="StartContainer for \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\" returns successfully" Apr 30 03:25:01.561056 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:25:01.561315 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:25:01.561392 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:25:01.571491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:25:01.571967 systemd[1]: cri-containerd-f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d.scope: Deactivated successfully. 
Apr 30 03:25:01.598115 containerd[1466]: time="2025-04-30T03:25:01.598060008Z" level=info msg="shim disconnected" id=f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d namespace=k8s.io Apr 30 03:25:01.598115 containerd[1466]: time="2025-04-30T03:25:01.598115081Z" level=warning msg="cleaning up after shim disconnected" id=f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d namespace=k8s.io Apr 30 03:25:01.598375 containerd[1466]: time="2025-04-30T03:25:01.598123619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:25:01.598854 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:25:01.784375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d-rootfs.mount: Deactivated successfully. Apr 30 03:25:02.061903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604480078.mount: Deactivated successfully. Apr 30 03:25:02.316732 kubelet[2501]: E0430 03:25:02.316458 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:02.318568 containerd[1466]: time="2025-04-30T03:25:02.318468682Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 03:25:02.343579 containerd[1466]: time="2025-04-30T03:25:02.343518400Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\"" Apr 30 03:25:02.344226 containerd[1466]: time="2025-04-30T03:25:02.344089426Z" level=info msg="StartContainer for \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\"" Apr 30 
03:25:02.379072 systemd[1]: Started cri-containerd-6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e.scope - libcontainer container 6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e. Apr 30 03:25:02.414078 containerd[1466]: time="2025-04-30T03:25:02.414015994Z" level=info msg="StartContainer for \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\" returns successfully" Apr 30 03:25:02.416721 systemd[1]: cri-containerd-6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e.scope: Deactivated successfully. Apr 30 03:25:02.457864 containerd[1466]: time="2025-04-30T03:25:02.457789474Z" level=info msg="shim disconnected" id=6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e namespace=k8s.io Apr 30 03:25:02.458218 containerd[1466]: time="2025-04-30T03:25:02.458184076Z" level=warning msg="cleaning up after shim disconnected" id=6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e namespace=k8s.io Apr 30 03:25:02.458218 containerd[1466]: time="2025-04-30T03:25:02.458201161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:25:03.004010 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:46276.service - OpenSSH per-connection server daemon (10.0.0.1:46276). Apr 30 03:25:03.049866 sshd[3160]: Accepted publickey for core from 10.0.0.1 port 46276 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:25:03.052504 sshd[3160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:03.058340 systemd-logind[1446]: New session 11 of user core. Apr 30 03:25:03.071282 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:25:03.260460 sshd[3160]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:03.266584 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:46276.service: Deactivated successfully. Apr 30 03:25:03.269105 systemd[1]: session-11.scope: Deactivated successfully. 
Apr 30 03:25:03.269887 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:25:03.271424 systemd-logind[1446]: Removed session 11. Apr 30 03:25:03.275698 containerd[1466]: time="2025-04-30T03:25:03.275627517Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:25:03.276548 containerd[1466]: time="2025-04-30T03:25:03.276497275Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 03:25:03.277904 containerd[1466]: time="2025-04-30T03:25:03.277864405Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:25:03.279860 containerd[1466]: time="2025-04-30T03:25:03.279779812Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.510400589s" Apr 30 03:25:03.279930 containerd[1466]: time="2025-04-30T03:25:03.279863193Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 03:25:03.282710 containerd[1466]: time="2025-04-30T03:25:03.282657367Z" level=info msg="CreateContainer within sandbox \"5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 03:25:03.299573 containerd[1466]: time="2025-04-30T03:25:03.299515198Z" level=info msg="CreateContainer within sandbox \"5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\"" Apr 30 03:25:03.300258 containerd[1466]: time="2025-04-30T03:25:03.300210227Z" level=info msg="StartContainer for \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\"" Apr 30 03:25:03.322749 kubelet[2501]: E0430 03:25:03.322696 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:03.325852 containerd[1466]: time="2025-04-30T03:25:03.325658507Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 03:25:03.339088 systemd[1]: Started cri-containerd-3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21.scope - libcontainer container 3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21. 
Apr 30 03:25:03.343326 containerd[1466]: time="2025-04-30T03:25:03.343183779Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\"" Apr 30 03:25:03.345391 containerd[1466]: time="2025-04-30T03:25:03.345363009Z" level=info msg="StartContainer for \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\"" Apr 30 03:25:03.377244 containerd[1466]: time="2025-04-30T03:25:03.373785903Z" level=info msg="StartContainer for \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\" returns successfully" Apr 30 03:25:03.383235 systemd[1]: Started cri-containerd-67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e.scope - libcontainer container 67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e. Apr 30 03:25:03.418504 systemd[1]: cri-containerd-67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e.scope: Deactivated successfully. 
Apr 30 03:25:03.510743 containerd[1466]: time="2025-04-30T03:25:03.510550876Z" level=info msg="StartContainer for \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\" returns successfully" Apr 30 03:25:03.663569 containerd[1466]: time="2025-04-30T03:25:03.663461314Z" level=info msg="shim disconnected" id=67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e namespace=k8s.io Apr 30 03:25:03.663569 containerd[1466]: time="2025-04-30T03:25:03.663565639Z" level=warning msg="cleaning up after shim disconnected" id=67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e namespace=k8s.io Apr 30 03:25:03.663569 containerd[1466]: time="2025-04-30T03:25:03.663577313Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:25:04.326953 kubelet[2501]: E0430 03:25:04.326700 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:04.331948 kubelet[2501]: E0430 03:25:04.331612 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:04.334183 containerd[1466]: time="2025-04-30T03:25:04.333995336Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 03:25:04.675640 containerd[1466]: time="2025-04-30T03:25:04.675458992Z" level=info msg="CreateContainer within sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\"" Apr 30 03:25:04.676187 containerd[1466]: time="2025-04-30T03:25:04.676118997Z" level=info msg="StartContainer for 
\"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\"" Apr 30 03:25:04.734097 systemd[1]: Started cri-containerd-af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0.scope - libcontainer container af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0. Apr 30 03:25:04.852047 containerd[1466]: time="2025-04-30T03:25:04.851977891Z" level=info msg="StartContainer for \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\" returns successfully" Apr 30 03:25:04.892496 kubelet[2501]: I0430 03:25:04.892429 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mkvkd" podStartSLOduration=2.433327936 podStartE2EDuration="39.892407803s" podCreationTimestamp="2025-04-30 03:24:25 +0000 UTC" firstStartedPulling="2025-04-30 03:24:25.821897136 +0000 UTC m=+6.691722070" lastFinishedPulling="2025-04-30 03:25:03.280977003 +0000 UTC m=+44.150801937" observedRunningTime="2025-04-30 03:25:04.715223438 +0000 UTC m=+45.585048372" watchObservedRunningTime="2025-04-30 03:25:04.892407803 +0000 UTC m=+45.762232737" Apr 30 03:25:05.140242 kubelet[2501]: I0430 03:25:05.140195 2501 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 03:25:05.335400 kubelet[2501]: E0430 03:25:05.335365 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:05.335838 kubelet[2501]: E0430 03:25:05.335648 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:05.367790 systemd[1]: Created slice kubepods-burstable-pod8f1408e9_e983_45b8_a778_f590b0a4361b.slice - libcontainer container kubepods-burstable-pod8f1408e9_e983_45b8_a778_f590b0a4361b.slice. 
Apr 30 03:25:05.372734 systemd[1]: Created slice kubepods-burstable-pod786016a3_3e9f_4c79_afd3_4ed91c118fbe.slice - libcontainer container kubepods-burstable-pod786016a3_3e9f_4c79_afd3_4ed91c118fbe.slice. Apr 30 03:25:05.407539 kubelet[2501]: I0430 03:25:05.407191 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/786016a3-3e9f-4c79-afd3-4ed91c118fbe-config-volume\") pod \"coredns-668d6bf9bc-259vr\" (UID: \"786016a3-3e9f-4c79-afd3-4ed91c118fbe\") " pod="kube-system/coredns-668d6bf9bc-259vr" Apr 30 03:25:05.407539 kubelet[2501]: I0430 03:25:05.407243 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsp6t\" (UniqueName: \"kubernetes.io/projected/786016a3-3e9f-4c79-afd3-4ed91c118fbe-kube-api-access-bsp6t\") pod \"coredns-668d6bf9bc-259vr\" (UID: \"786016a3-3e9f-4c79-afd3-4ed91c118fbe\") " pod="kube-system/coredns-668d6bf9bc-259vr" Apr 30 03:25:05.407539 kubelet[2501]: I0430 03:25:05.407274 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f1408e9-e983-45b8-a778-f590b0a4361b-config-volume\") pod \"coredns-668d6bf9bc-wjkwb\" (UID: \"8f1408e9-e983-45b8-a778-f590b0a4361b\") " pod="kube-system/coredns-668d6bf9bc-wjkwb" Apr 30 03:25:05.407539 kubelet[2501]: I0430 03:25:05.407289 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkcb\" (UniqueName: \"kubernetes.io/projected/8f1408e9-e983-45b8-a778-f590b0a4361b-kube-api-access-2bkcb\") pod \"coredns-668d6bf9bc-wjkwb\" (UID: \"8f1408e9-e983-45b8-a778-f590b0a4361b\") " pod="kube-system/coredns-668d6bf9bc-wjkwb" Apr 30 03:25:05.566660 kubelet[2501]: I0430 03:25:05.566564 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ghpvk" 
podStartSLOduration=6.484297128 podStartE2EDuration="40.566542164s" podCreationTimestamp="2025-04-30 03:24:25 +0000 UTC" firstStartedPulling="2025-04-30 03:24:25.686815488 +0000 UTC m=+6.556640422" lastFinishedPulling="2025-04-30 03:24:59.769060524 +0000 UTC m=+40.638885458" observedRunningTime="2025-04-30 03:25:05.425249461 +0000 UTC m=+46.295074395" watchObservedRunningTime="2025-04-30 03:25:05.566542164 +0000 UTC m=+46.436367088" Apr 30 03:25:05.671257 kubelet[2501]: E0430 03:25:05.671091 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:05.679954 containerd[1466]: time="2025-04-30T03:25:05.675577667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjkwb,Uid:8f1408e9-e983-45b8-a778-f590b0a4361b,Namespace:kube-system,Attempt:0,}" Apr 30 03:25:05.679954 containerd[1466]: time="2025-04-30T03:25:05.677349127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-259vr,Uid:786016a3-3e9f-4c79-afd3-4ed91c118fbe,Namespace:kube-system,Attempt:0,}" Apr 30 03:25:05.680554 kubelet[2501]: E0430 03:25:05.676226 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:06.336879 kubelet[2501]: E0430 03:25:06.336826 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:07.338223 kubelet[2501]: E0430 03:25:07.338171 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:08.054696 systemd-networkd[1380]: cilium_host: Link UP Apr 30 03:25:08.054894 systemd-networkd[1380]: cilium_net: Link UP 
Apr 30 03:25:08.055134 systemd-networkd[1380]: cilium_net: Gained carrier Apr 30 03:25:08.055372 systemd-networkd[1380]: cilium_host: Gained carrier Apr 30 03:25:08.078640 systemd-networkd[1380]: cilium_host: Gained IPv6LL Apr 30 03:25:08.198395 systemd-networkd[1380]: cilium_vxlan: Link UP Apr 30 03:25:08.198412 systemd-networkd[1380]: cilium_vxlan: Gained carrier Apr 30 03:25:08.272427 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:34828.service - OpenSSH per-connection server daemon (10.0.0.1:34828). Apr 30 03:25:08.314843 sshd[3502]: Accepted publickey for core from 10.0.0.1 port 34828 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:25:08.317154 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:08.322227 systemd-logind[1446]: New session 12 of user core. Apr 30 03:25:08.326134 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:25:08.473001 kernel: NET: Registered PF_ALG protocol family Apr 30 03:25:08.473377 sshd[3502]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:08.478425 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:34828.service: Deactivated successfully. Apr 30 03:25:08.480980 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:25:08.481886 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:25:08.482927 systemd-logind[1446]: Removed session 12. 
Apr 30 03:25:08.534241 systemd-networkd[1380]: cilium_net: Gained IPv6LL Apr 30 03:25:09.223437 systemd-networkd[1380]: lxc_health: Link UP Apr 30 03:25:09.231571 systemd-networkd[1380]: lxc_health: Gained carrier Apr 30 03:25:09.491030 systemd-networkd[1380]: lxc9c2da3686f5a: Link UP Apr 30 03:25:09.558965 kernel: eth0: renamed from tmp140f0 Apr 30 03:25:09.562328 kubelet[2501]: E0430 03:25:09.561723 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:09.566334 systemd-networkd[1380]: lxc9c2da3686f5a: Gained carrier Apr 30 03:25:09.761111 systemd-networkd[1380]: lxc527cb3a7c1e7: Link UP Apr 30 03:25:09.779952 kernel: eth0: renamed from tmp8d7e3 Apr 30 03:25:09.795221 systemd-networkd[1380]: lxc527cb3a7c1e7: Gained carrier Apr 30 03:25:10.134219 systemd-networkd[1380]: cilium_vxlan: Gained IPv6LL Apr 30 03:25:10.347943 kubelet[2501]: E0430 03:25:10.347889 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:10.775220 systemd-networkd[1380]: lxc9c2da3686f5a: Gained IPv6LL Apr 30 03:25:10.838230 systemd-networkd[1380]: lxc527cb3a7c1e7: Gained IPv6LL Apr 30 03:25:10.902576 systemd-networkd[1380]: lxc_health: Gained IPv6LL Apr 30 03:25:11.349628 kubelet[2501]: E0430 03:25:11.349577 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:13.497036 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:34844.service - OpenSSH per-connection server daemon (10.0.0.1:34844). 
Apr 30 03:25:13.550854 sshd[3810]: Accepted publickey for core from 10.0.0.1 port 34844 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:25:13.552176 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:13.574164 systemd-logind[1446]: New session 13 of user core. Apr 30 03:25:13.581047 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:25:13.645437 containerd[1466]: time="2025-04-30T03:25:13.644544634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:25:13.645437 containerd[1466]: time="2025-04-30T03:25:13.644620116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:25:13.645437 containerd[1466]: time="2025-04-30T03:25:13.644635809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:25:13.645437 containerd[1466]: time="2025-04-30T03:25:13.644772225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:25:13.667253 systemd[1]: run-containerd-runc-k8s.io-140f0562f272ace18b5ce619762a707e399f6579d2c11d845f4cedaf73577803-runc.AQMZm2.mount: Deactivated successfully. Apr 30 03:25:13.674214 systemd[1]: Started cri-containerd-140f0562f272ace18b5ce619762a707e399f6579d2c11d845f4cedaf73577803.scope - libcontainer container 140f0562f272ace18b5ce619762a707e399f6579d2c11d845f4cedaf73577803. Apr 30 03:25:13.674863 containerd[1466]: time="2025-04-30T03:25:13.673984916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:25:13.675832 containerd[1466]: time="2025-04-30T03:25:13.675748416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:25:13.675832 containerd[1466]: time="2025-04-30T03:25:13.675803007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:25:13.676056 containerd[1466]: time="2025-04-30T03:25:13.676015489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:25:13.694265 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:25:13.702147 systemd[1]: Started cri-containerd-8d7e3c9a653db8214945fbb03fe251bf540e03d8da22926bcfd96830680c5bd5.scope - libcontainer container 8d7e3c9a653db8214945fbb03fe251bf540e03d8da22926bcfd96830680c5bd5. 
Apr 30 03:25:13.716206 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 03:25:13.735399 containerd[1466]: time="2025-04-30T03:25:13.734208045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-259vr,Uid:786016a3-3e9f-4c79-afd3-4ed91c118fbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"140f0562f272ace18b5ce619762a707e399f6579d2c11d845f4cedaf73577803\"" Apr 30 03:25:13.735653 kubelet[2501]: E0430 03:25:13.735612 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:13.742100 containerd[1466]: time="2025-04-30T03:25:13.742051863Z" level=info msg="CreateContainer within sandbox \"140f0562f272ace18b5ce619762a707e399f6579d2c11d845f4cedaf73577803\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:25:13.755972 containerd[1466]: time="2025-04-30T03:25:13.755734396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wjkwb,Uid:8f1408e9-e983-45b8-a778-f590b0a4361b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d7e3c9a653db8214945fbb03fe251bf540e03d8da22926bcfd96830680c5bd5\"" Apr 30 03:25:13.756846 kubelet[2501]: E0430 03:25:13.756741 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:13.759410 containerd[1466]: time="2025-04-30T03:25:13.759358807Z" level=info msg="CreateContainer within sandbox \"8d7e3c9a653db8214945fbb03fe251bf540e03d8da22926bcfd96830680c5bd5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:25:13.846064 sshd[3810]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:13.850860 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:34844.service: Deactivated successfully. 
Apr 30 03:25:13.853405 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:25:13.854192 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:25:13.855178 systemd-logind[1446]: Removed session 13. Apr 30 03:25:14.651428 containerd[1466]: time="2025-04-30T03:25:14.651332615Z" level=info msg="CreateContainer within sandbox \"140f0562f272ace18b5ce619762a707e399f6579d2c11d845f4cedaf73577803\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3b3f6a02f44562856a906e4aa689d9d6bdf2a157ed90fd7eaba717c89803a38\"" Apr 30 03:25:14.652122 containerd[1466]: time="2025-04-30T03:25:14.652071395Z" level=info msg="StartContainer for \"b3b3f6a02f44562856a906e4aa689d9d6bdf2a157ed90fd7eaba717c89803a38\"" Apr 30 03:25:14.693062 systemd[1]: Started cri-containerd-b3b3f6a02f44562856a906e4aa689d9d6bdf2a157ed90fd7eaba717c89803a38.scope - libcontainer container b3b3f6a02f44562856a906e4aa689d9d6bdf2a157ed90fd7eaba717c89803a38. Apr 30 03:25:14.802591 containerd[1466]: time="2025-04-30T03:25:14.802544849Z" level=info msg="CreateContainer within sandbox \"8d7e3c9a653db8214945fbb03fe251bf540e03d8da22926bcfd96830680c5bd5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce976db8a9e5cd210cec30a64987efcf6aa8d85ac5716b92675c0bb101951f5c\"" Apr 30 03:25:14.803278 containerd[1466]: time="2025-04-30T03:25:14.803116439Z" level=info msg="StartContainer for \"ce976db8a9e5cd210cec30a64987efcf6aa8d85ac5716b92675c0bb101951f5c\"" Apr 30 03:25:14.836075 systemd[1]: Started cri-containerd-ce976db8a9e5cd210cec30a64987efcf6aa8d85ac5716b92675c0bb101951f5c.scope - libcontainer container ce976db8a9e5cd210cec30a64987efcf6aa8d85ac5716b92675c0bb101951f5c. 
Apr 30 03:25:14.954406 containerd[1466]: time="2025-04-30T03:25:14.954219101Z" level=info msg="StartContainer for \"b3b3f6a02f44562856a906e4aa689d9d6bdf2a157ed90fd7eaba717c89803a38\" returns successfully" Apr 30 03:25:14.954406 containerd[1466]: time="2025-04-30T03:25:14.954275987Z" level=info msg="StartContainer for \"ce976db8a9e5cd210cec30a64987efcf6aa8d85ac5716b92675c0bb101951f5c\" returns successfully" Apr 30 03:25:15.363672 kubelet[2501]: E0430 03:25:15.363324 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:15.364784 kubelet[2501]: E0430 03:25:15.364762 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:15.486588 kubelet[2501]: I0430 03:25:15.486512 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-259vr" podStartSLOduration=50.486494263 podStartE2EDuration="50.486494263s" podCreationTimestamp="2025-04-30 03:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:25:15.407388172 +0000 UTC m=+56.277213106" watchObservedRunningTime="2025-04-30 03:25:15.486494263 +0000 UTC m=+56.356319197" Apr 30 03:25:15.486588 kubelet[2501]: I0430 03:25:15.486608 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wjkwb" podStartSLOduration=50.486603794 podStartE2EDuration="50.486603794s" podCreationTimestamp="2025-04-30 03:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:25:15.486416375 +0000 UTC m=+56.356241309" watchObservedRunningTime="2025-04-30 03:25:15.486603794 +0000 UTC 
m=+56.356428718" Apr 30 03:25:16.366664 kubelet[2501]: E0430 03:25:16.366552 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:16.368222 kubelet[2501]: E0430 03:25:16.366702 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:17.368500 kubelet[2501]: E0430 03:25:17.368262 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:17.368500 kubelet[2501]: E0430 03:25:17.368411 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:18.370086 kubelet[2501]: E0430 03:25:18.370049 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 03:25:18.860577 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:38298.service - OpenSSH per-connection server daemon (10.0.0.1:38298). Apr 30 03:25:18.900512 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 38298 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:25:18.902540 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:18.906935 systemd-logind[1446]: New session 14 of user core. Apr 30 03:25:18.918081 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:25:19.029470 sshd[3998]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:19.043766 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:38298.service: Deactivated successfully. 
Apr 30 03:25:19.045570 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 03:25:19.047343 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit.
Apr 30 03:25:19.057181 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:38314.service - OpenSSH per-connection server daemon (10.0.0.1:38314).
Apr 30 03:25:19.058100 systemd-logind[1446]: Removed session 14.
Apr 30 03:25:19.089747 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 38314 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:19.091462 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:19.095675 systemd-logind[1446]: New session 15 of user core.
Apr 30 03:25:19.105061 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 03:25:19.264843 sshd[4013]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:19.276648 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:38314.service: Deactivated successfully.
Apr 30 03:25:19.278610 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 03:25:19.281830 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Apr 30 03:25:19.288281 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:38324.service - OpenSSH per-connection server daemon (10.0.0.1:38324).
Apr 30 03:25:19.289250 systemd-logind[1446]: Removed session 15.
Apr 30 03:25:19.332288 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 38324 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:19.334081 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:19.338013 systemd-logind[1446]: New session 16 of user core.
Apr 30 03:25:19.348071 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 03:25:19.459604 sshd[4029]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:19.463494 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:38324.service: Deactivated successfully.
Apr 30 03:25:19.465468 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 03:25:19.466060 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Apr 30 03:25:19.466889 systemd-logind[1446]: Removed session 16.
Apr 30 03:25:24.472150 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:38334.service - OpenSSH per-connection server daemon (10.0.0.1:38334).
Apr 30 03:25:24.511750 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 38334 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:24.513497 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:24.517812 systemd-logind[1446]: New session 17 of user core.
Apr 30 03:25:24.529067 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:25:24.638080 sshd[4045]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:24.642003 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:38334.service: Deactivated successfully.
Apr 30 03:25:24.644372 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:25:24.645228 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:25:24.646289 systemd-logind[1446]: Removed session 17.
Apr 30 03:25:29.653329 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:37928.service - OpenSSH per-connection server daemon (10.0.0.1:37928).
Apr 30 03:25:29.690787 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 37928 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:29.692653 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:29.696849 systemd-logind[1446]: New session 18 of user core.
Apr 30 03:25:29.706052 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:25:29.826442 sshd[4062]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:29.837005 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:37928.service: Deactivated successfully.
Apr 30 03:25:29.839146 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 03:25:29.840679 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Apr 30 03:25:29.851458 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:37936.service - OpenSSH per-connection server daemon (10.0.0.1:37936).
Apr 30 03:25:29.852447 systemd-logind[1446]: Removed session 18.
Apr 30 03:25:29.885414 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 37936 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:29.887268 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:29.892033 systemd-logind[1446]: New session 19 of user core.
Apr 30 03:25:29.909054 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 03:25:30.179547 sshd[4076]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:30.192204 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:37936.service: Deactivated successfully.
Apr 30 03:25:30.194382 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 03:25:30.196454 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Apr 30 03:25:30.202455 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:37948.service - OpenSSH per-connection server daemon (10.0.0.1:37948).
Apr 30 03:25:30.203887 systemd-logind[1446]: Removed session 19.
Apr 30 03:25:30.239380 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 37948 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:30.241411 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:30.245591 systemd-logind[1446]: New session 20 of user core.
Apr 30 03:25:30.255084 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:25:31.179674 sshd[4089]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:31.191391 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:37948.service: Deactivated successfully.
Apr 30 03:25:31.194371 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 03:25:31.197279 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Apr 30 03:25:31.203332 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:37950.service - OpenSSH per-connection server daemon (10.0.0.1:37950).
Apr 30 03:25:31.206428 systemd-logind[1446]: Removed session 20.
Apr 30 03:25:31.242481 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 37950 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:31.244656 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:31.249616 systemd-logind[1446]: New session 21 of user core.
Apr 30 03:25:31.257149 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 03:25:31.505208 sshd[4116]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:31.517825 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:37950.service: Deactivated successfully.
Apr 30 03:25:31.520475 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 03:25:31.523383 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Apr 30 03:25:31.531359 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:37966.service - OpenSSH per-connection server daemon (10.0.0.1:37966).
Apr 30 03:25:31.532825 systemd-logind[1446]: Removed session 21.
Apr 30 03:25:31.565662 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 37966 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:31.567982 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:31.573421 systemd-logind[1446]: New session 22 of user core.
Apr 30 03:25:31.580241 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:25:31.699253 sshd[4128]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:31.704187 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:37966.service: Deactivated successfully.
Apr 30 03:25:31.706750 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:25:31.707543 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:25:31.708706 systemd-logind[1446]: Removed session 22.
Apr 30 03:25:36.711419 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:36110.service - OpenSSH per-connection server daemon (10.0.0.1:36110).
Apr 30 03:25:36.749309 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 36110 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:36.751038 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:36.755846 systemd-logind[1446]: New session 23 of user core.
Apr 30 03:25:36.765164 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:25:36.885216 sshd[4142]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:36.890519 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:36110.service: Deactivated successfully.
Apr 30 03:25:36.893521 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:25:36.895046 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:25:36.896056 systemd-logind[1446]: Removed session 23.
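Editor's note: the recurring kubelet "Nameserver limits exceeded" errors in this log come from kubelet capping the resolv.conf it hands to pods at 3 nameservers and dropping the rest. A minimal illustrative sketch of that truncation (not kubelet's actual code; the function name and the fourth nameserver are invented for the example):

```python
# Illustration of kubelet's 3-nameserver cap behind the recurring
# "Nameserver limits exceeded" log records. Hypothetical helper,
# not kubelet source.
MAX_NAMESERVERS = 3  # kubelet's per-pod resolv.conf limit

def truncate_nameservers(resolv_conf: str):
    """Return (applied, omitted) nameserver lists from resolv.conf text."""
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

# A node resolv.conf with four nameservers (9.9.9.9 is a made-up extra)
applied, omitted = truncate_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
    "nameserver 8.8.8.8\nnameserver 9.9.9.9\n"
)
# applied matches "the applied nameserver line" in the log;
# the extras are omitted and the event is reported.
```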
Apr 30 03:25:40.221523 kubelet[2501]: E0430 03:25:40.221481 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:25:41.221908 kubelet[2501]: E0430 03:25:41.221847 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:25:41.902210 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:36124.service - OpenSSH per-connection server daemon (10.0.0.1:36124).
Apr 30 03:25:41.942812 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 36124 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:41.944807 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:41.949244 systemd-logind[1446]: New session 24 of user core.
Apr 30 03:25:41.960186 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:25:42.073397 sshd[4158]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:42.078109 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:36124.service: Deactivated successfully.
Apr 30 03:25:42.081272 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:25:42.082325 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:25:42.083433 systemd-logind[1446]: Removed session 24.
Apr 30 03:25:46.222371 kubelet[2501]: E0430 03:25:46.222264 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:25:47.096849 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:55296.service - OpenSSH per-connection server daemon (10.0.0.1:55296).
Apr 30 03:25:47.139657 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 55296 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:47.141530 sshd[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:47.146367 systemd-logind[1446]: New session 25 of user core.
Apr 30 03:25:47.156063 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 03:25:47.266458 sshd[4172]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:47.271030 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:55296.service: Deactivated successfully.
Apr 30 03:25:47.273481 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:25:47.274194 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:25:47.275505 systemd-logind[1446]: Removed session 25.
Apr 30 03:25:52.283350 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:55304.service - OpenSSH per-connection server daemon (10.0.0.1:55304).
Apr 30 03:25:52.322673 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 55304 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:52.324685 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:52.329892 systemd-logind[1446]: New session 26 of user core.
Apr 30 03:25:52.339103 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 03:25:52.454201 sshd[4186]: pam_unix(sshd:session): session closed for user core
Apr 30 03:25:52.463586 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:55304.service: Deactivated successfully.
Apr 30 03:25:52.466357 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:25:52.468969 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:25:52.476385 systemd[1]: Started sshd@26-10.0.0.59:22-10.0.0.1:55310.service - OpenSSH per-connection server daemon (10.0.0.1:55310).
Apr 30 03:25:52.477681 systemd-logind[1446]: Removed session 26.
Apr 30 03:25:52.511489 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 55310 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:52.513739 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:52.518733 systemd-logind[1446]: New session 27 of user core.
Apr 30 03:25:52.529186 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 03:25:53.947174 containerd[1466]: time="2025-04-30T03:25:53.947105596Z" level=info msg="StopContainer for \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\" with timeout 30 (s)"
Apr 30 03:25:53.947825 containerd[1466]: time="2025-04-30T03:25:53.947578743Z" level=info msg="Stop container \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\" with signal terminated"
Apr 30 03:25:53.964721 systemd[1]: cri-containerd-3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21.scope: Deactivated successfully.
Apr 30 03:25:53.987945 containerd[1466]: time="2025-04-30T03:25:53.985827945Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 03:25:53.992448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21-rootfs.mount: Deactivated successfully.
Apr 30 03:25:54.012802 containerd[1466]: time="2025-04-30T03:25:54.012742536Z" level=info msg="StopContainer for \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\" with timeout 2 (s)"
Apr 30 03:25:54.013223 containerd[1466]: time="2025-04-30T03:25:54.013200066Z" level=info msg="Stop container \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\" with signal terminated"
Apr 30 03:25:54.022624 systemd-networkd[1380]: lxc_health: Link DOWN
Apr 30 03:25:54.022635 systemd-networkd[1380]: lxc_health: Lost carrier
Apr 30 03:25:54.054818 systemd[1]: cri-containerd-af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0.scope: Deactivated successfully.
Apr 30 03:25:54.055423 systemd[1]: cri-containerd-af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0.scope: Consumed 7.670s CPU time.
Apr 30 03:25:54.078552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0-rootfs.mount: Deactivated successfully.
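Editor's note: the "StopContainer ... with timeout" / "Stop container ... with signal terminated" records above reflect the usual graceful-stop pattern: send SIGTERM, wait out the grace period, then escalate to SIGKILL. A generic sketch of that pattern (not containerd's implementation):

```python
import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout: float) -> int:
    """Send SIGTERM; if the process outlives `timeout` seconds,
    escalate to SIGKILL. Returns the process exit code."""
    proc.send_signal(signal.SIGTERM)  # "with signal terminated"
    try:
        return proc.wait(timeout=timeout)  # the grace period
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL: no further grace
        return proc.wait()

# Demo: a long-running child stopped with a 2-second grace period,
# mirroring the "with timeout 2" record above.
p = subprocess.Popen(["sleep", "60"])
code = stop_with_timeout(p, timeout=2.0)
# On POSIX, a negative return code means death by signal.
```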
Apr 30 03:25:54.172661 containerd[1466]: time="2025-04-30T03:25:54.172527170Z" level=info msg="shim disconnected" id=af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0 namespace=k8s.io
Apr 30 03:25:54.172661 containerd[1466]: time="2025-04-30T03:25:54.172643250Z" level=warning msg="cleaning up after shim disconnected" id=af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0 namespace=k8s.io
Apr 30 03:25:54.172661 containerd[1466]: time="2025-04-30T03:25:54.172655293Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:25:54.173217 containerd[1466]: time="2025-04-30T03:25:54.172875933Z" level=info msg="shim disconnected" id=3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21 namespace=k8s.io
Apr 30 03:25:54.173217 containerd[1466]: time="2025-04-30T03:25:54.172982897Z" level=warning msg="cleaning up after shim disconnected" id=3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21 namespace=k8s.io
Apr 30 03:25:54.173217 containerd[1466]: time="2025-04-30T03:25:54.172991603Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:25:54.249296 containerd[1466]: time="2025-04-30T03:25:54.249235920Z" level=info msg="StopContainer for \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\" returns successfully"
Apr 30 03:25:54.251029 containerd[1466]: time="2025-04-30T03:25:54.250954797Z" level=info msg="StopContainer for \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\" returns successfully"
Apr 30 03:25:54.253703 containerd[1466]: time="2025-04-30T03:25:54.253644340Z" level=info msg="StopPodSandbox for \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\""
Apr 30 03:25:54.253759 containerd[1466]: time="2025-04-30T03:25:54.253714764Z" level=info msg="Container to stop \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:25:54.253759 containerd[1466]: time="2025-04-30T03:25:54.253735462Z" level=info msg="Container to stop \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:25:54.253759 containerd[1466]: time="2025-04-30T03:25:54.253749951Z" level=info msg="Container to stop \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:25:54.253846 containerd[1466]: time="2025-04-30T03:25:54.253760931Z" level=info msg="Container to stop \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:25:54.253846 containerd[1466]: time="2025-04-30T03:25:54.253772824Z" level=info msg="Container to stop \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:25:54.256877 containerd[1466]: time="2025-04-30T03:25:54.256516819Z" level=info msg="StopPodSandbox for \"5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8\""
Apr 30 03:25:54.256877 containerd[1466]: time="2025-04-30T03:25:54.256589538Z" level=info msg="Container to stop \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:25:54.257145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7-shm.mount: Deactivated successfully.
Apr 30 03:25:54.260229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8-shm.mount: Deactivated successfully.
Apr 30 03:25:54.261742 systemd[1]: cri-containerd-46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7.scope: Deactivated successfully.
Apr 30 03:25:54.266351 systemd[1]: cri-containerd-5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8.scope: Deactivated successfully.
Apr 30 03:25:54.282748 kubelet[2501]: E0430 03:25:54.282604 2501 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 03:25:54.292639 containerd[1466]: time="2025-04-30T03:25:54.292326598Z" level=info msg="shim disconnected" id=46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7 namespace=k8s.io
Apr 30 03:25:54.292639 containerd[1466]: time="2025-04-30T03:25:54.292402442Z" level=warning msg="cleaning up after shim disconnected" id=46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7 namespace=k8s.io
Apr 30 03:25:54.292639 containerd[1466]: time="2025-04-30T03:25:54.292418021Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:25:54.293051 containerd[1466]: time="2025-04-30T03:25:54.292836176Z" level=info msg="shim disconnected" id=5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8 namespace=k8s.io
Apr 30 03:25:54.293051 containerd[1466]: time="2025-04-30T03:25:54.292870351Z" level=warning msg="cleaning up after shim disconnected" id=5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8 namespace=k8s.io
Apr 30 03:25:54.293051 containerd[1466]: time="2025-04-30T03:25:54.292881102Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:25:54.326274 containerd[1466]: time="2025-04-30T03:25:54.326180148Z" level=info msg="TearDown network for sandbox \"5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8\" successfully"
Apr 30 03:25:54.326274 containerd[1466]: time="2025-04-30T03:25:54.326249680Z" level=info msg="StopPodSandbox for \"5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8\" returns successfully"
Apr 30 03:25:54.327609 containerd[1466]: time="2025-04-30T03:25:54.327559991Z" level=info msg="TearDown network for sandbox \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" successfully"
Apr 30 03:25:54.327905 containerd[1466]: time="2025-04-30T03:25:54.327684417Z" level=info msg="StopPodSandbox for \"46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7\" returns successfully"
Apr 30 03:25:54.400592 kubelet[2501]: I0430 03:25:54.400504 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-etc-cni-netd\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.400592 kubelet[2501]: I0430 03:25:54.400595 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bccd8a08-42f5-478c-a2d0-833e44ac5978-cilium-config-path\") pod \"bccd8a08-42f5-478c-a2d0-833e44ac5978\" (UID: \"bccd8a08-42f5-478c-a2d0-833e44ac5978\") "
Apr 30 03:25:54.401062 kubelet[2501]: I0430 03:25:54.400626 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-cgroup\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401062 kubelet[2501]: I0430 03:25:54.400649 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-host-proc-sys-net\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401062 kubelet[2501]: I0430 03:25:54.400683 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2bfj\" (UniqueName: \"kubernetes.io/projected/bccd8a08-42f5-478c-a2d0-833e44ac5978-kube-api-access-h2bfj\") pod \"bccd8a08-42f5-478c-a2d0-833e44ac5978\" (UID: \"bccd8a08-42f5-478c-a2d0-833e44ac5978\") "
Apr 30 03:25:54.401062 kubelet[2501]: I0430 03:25:54.400711 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88cwf\" (UniqueName: \"kubernetes.io/projected/354a3d62-5ab7-4b54-8289-98acde9e6e04-kube-api-access-88cwf\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401062 kubelet[2501]: I0430 03:25:54.400735 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/354a3d62-5ab7-4b54-8289-98acde9e6e04-clustermesh-secrets\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401062 kubelet[2501]: I0430 03:25:54.400758 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-hostproc\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401219 kubelet[2501]: I0430 03:25:54.400776 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-xtables-lock\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401219 kubelet[2501]: I0430 03:25:54.400795 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-lib-modules\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401219 kubelet[2501]: I0430 03:25:54.400819 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-host-proc-sys-kernel\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401219 kubelet[2501]: I0430 03:25:54.400999 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/354a3d62-5ab7-4b54-8289-98acde9e6e04-hubble-tls\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401219 kubelet[2501]: I0430 03:25:54.401020 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-run\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401219 kubelet[2501]: I0430 03:25:54.401044 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cni-path\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401366 kubelet[2501]: I0430 03:25:54.401068 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-config-path\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401366 kubelet[2501]: I0430 03:25:54.401088 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-bpf-maps\") pod \"354a3d62-5ab7-4b54-8289-98acde9e6e04\" (UID: \"354a3d62-5ab7-4b54-8289-98acde9e6e04\") "
Apr 30 03:25:54.401458 kubelet[2501]: I0430 03:25:54.400782 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.401496 kubelet[2501]: I0430 03:25:54.401152 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.401496 kubelet[2501]: I0430 03:25:54.401379 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.401496 kubelet[2501]: I0430 03:25:54.401397 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-hostproc" (OuterVolumeSpecName: "hostproc") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.401496 kubelet[2501]: I0430 03:25:54.401411 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.401496 kubelet[2501]: I0430 03:25:54.401423 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.401676 kubelet[2501]: I0430 03:25:54.401494 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.405075 kubelet[2501]: I0430 03:25:54.405039 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cni-path" (OuterVolumeSpecName: "cni-path") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.405700 kubelet[2501]: I0430 03:25:54.405192 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.405700 kubelet[2501]: I0430 03:25:54.405450 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bccd8a08-42f5-478c-a2d0-833e44ac5978-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bccd8a08-42f5-478c-a2d0-833e44ac5978" (UID: "bccd8a08-42f5-478c-a2d0-833e44ac5978"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 30 03:25:54.405700 kubelet[2501]: I0430 03:25:54.405585 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 03:25:54.406587 kubelet[2501]: I0430 03:25:54.406441 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354a3d62-5ab7-4b54-8289-98acde9e6e04-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 30 03:25:54.406869 kubelet[2501]: I0430 03:25:54.406791 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354a3d62-5ab7-4b54-8289-98acde9e6e04-kube-api-access-88cwf" (OuterVolumeSpecName: "kube-api-access-88cwf") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "kube-api-access-88cwf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 03:25:54.406869 kubelet[2501]: I0430 03:25:54.406853 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354a3d62-5ab7-4b54-8289-98acde9e6e04-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 03:25:54.409034 kubelet[2501]: I0430 03:25:54.408999 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccd8a08-42f5-478c-a2d0-833e44ac5978-kube-api-access-h2bfj" (OuterVolumeSpecName: "kube-api-access-h2bfj") pod "bccd8a08-42f5-478c-a2d0-833e44ac5978" (UID: "bccd8a08-42f5-478c-a2d0-833e44ac5978"). InnerVolumeSpecName "kube-api-access-h2bfj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 03:25:54.409324 kubelet[2501]: I0430 03:25:54.409300 2501 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "354a3d62-5ab7-4b54-8289-98acde9e6e04" (UID: "354a3d62-5ab7-4b54-8289-98acde9e6e04"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 30 03:25:54.444804 kubelet[2501]: I0430 03:25:54.444751 2501 scope.go:117] "RemoveContainer" containerID="3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21"
Apr 30 03:25:54.448679 containerd[1466]: time="2025-04-30T03:25:54.448627659Z" level=info msg="RemoveContainer for \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\""
Apr 30 03:25:54.452655 systemd[1]: Removed slice kubepods-besteffort-podbccd8a08_42f5_478c_a2d0_833e44ac5978.slice - libcontainer container kubepods-besteffort-podbccd8a08_42f5_478c_a2d0_833e44ac5978.slice.
Apr 30 03:25:54.459420 systemd[1]: Removed slice kubepods-burstable-pod354a3d62_5ab7_4b54_8289_98acde9e6e04.slice - libcontainer container kubepods-burstable-pod354a3d62_5ab7_4b54_8289_98acde9e6e04.slice.
Apr 30 03:25:54.460409 kubelet[2501]: I0430 03:25:54.460222 2501 scope.go:117] "RemoveContainer" containerID="3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21"
Apr 30 03:25:54.460467 containerd[1466]: time="2025-04-30T03:25:54.459931256Z" level=info msg="RemoveContainer for \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\" returns successfully"
Apr 30 03:25:54.459516 systemd[1]: kubepods-burstable-pod354a3d62_5ab7_4b54_8289_98acde9e6e04.slice: Consumed 7.789s CPU time.
Apr 30 03:25:54.464033 containerd[1466]: time="2025-04-30T03:25:54.463960865Z" level=error msg="ContainerStatus for \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\": not found" Apr 30 03:25:54.476763 kubelet[2501]: E0430 03:25:54.476355 2501 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\": not found" containerID="3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21" Apr 30 03:25:54.476763 kubelet[2501]: I0430 03:25:54.476419 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21"} err="failed to get container status \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c914634291114a1cedd479875c7aef6e6fe634d8b4663d55b64f428d0582e21\": not found" Apr 30 03:25:54.476763 kubelet[2501]: I0430 03:25:54.476511 2501 scope.go:117] "RemoveContainer" containerID="af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0" Apr 30 03:25:54.478599 containerd[1466]: time="2025-04-30T03:25:54.478265326Z" level=info msg="RemoveContainer for \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\"" Apr 30 03:25:54.483224 containerd[1466]: time="2025-04-30T03:25:54.483171140Z" level=info msg="RemoveContainer for \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\" returns successfully" Apr 30 03:25:54.483501 kubelet[2501]: I0430 03:25:54.483465 2501 scope.go:117] "RemoveContainer" containerID="67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e" Apr 30 03:25:54.484586 
containerd[1466]: time="2025-04-30T03:25:54.484557907Z" level=info msg="RemoveContainer for \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\"" Apr 30 03:25:54.488766 containerd[1466]: time="2025-04-30T03:25:54.488726931Z" level=info msg="RemoveContainer for \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\" returns successfully" Apr 30 03:25:54.488965 kubelet[2501]: I0430 03:25:54.488892 2501 scope.go:117] "RemoveContainer" containerID="6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e" Apr 30 03:25:54.490693 containerd[1466]: time="2025-04-30T03:25:54.490397457Z" level=info msg="RemoveContainer for \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\"" Apr 30 03:25:54.502111 kubelet[2501]: I0430 03:25:54.501957 2501 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502111 kubelet[2501]: I0430 03:25:54.501987 2501 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/354a3d62-5ab7-4b54-8289-98acde9e6e04-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502111 kubelet[2501]: I0430 03:25:54.502000 2501 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502111 kubelet[2501]: I0430 03:25:54.502011 2501 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502111 kubelet[2501]: I0430 03:25:54.502024 2501 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502111 kubelet[2501]: I0430 03:25:54.502033 2501 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502111 kubelet[2501]: I0430 03:25:54.502044 2501 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bccd8a08-42f5-478c-a2d0-833e44ac5978-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502111 kubelet[2501]: I0430 03:25:54.502053 2501 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502576 kubelet[2501]: I0430 03:25:54.502063 2501 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502576 kubelet[2501]: I0430 03:25:54.502073 2501 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-88cwf\" (UniqueName: \"kubernetes.io/projected/354a3d62-5ab7-4b54-8289-98acde9e6e04-kube-api-access-88cwf\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502576 kubelet[2501]: I0430 03:25:54.502100 2501 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2bfj\" (UniqueName: \"kubernetes.io/projected/bccd8a08-42f5-478c-a2d0-833e44ac5978-kube-api-access-h2bfj\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502576 kubelet[2501]: I0430 03:25:54.502115 2501 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/354a3d62-5ab7-4b54-8289-98acde9e6e04-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502576 kubelet[2501]: I0430 03:25:54.502126 2501 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502576 kubelet[2501]: I0430 03:25:54.502136 2501 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502576 kubelet[2501]: I0430 03:25:54.502146 2501 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.502576 kubelet[2501]: I0430 03:25:54.502156 2501 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/354a3d62-5ab7-4b54-8289-98acde9e6e04-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 30 03:25:54.518139 containerd[1466]: time="2025-04-30T03:25:54.518068264Z" level=info msg="RemoveContainer for \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\" returns successfully" Apr 30 03:25:54.518470 kubelet[2501]: I0430 03:25:54.518334 2501 scope.go:117] "RemoveContainer" containerID="f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d" Apr 30 03:25:54.519740 containerd[1466]: time="2025-04-30T03:25:54.519709254Z" level=info msg="RemoveContainer for \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\"" Apr 30 03:25:54.523676 containerd[1466]: time="2025-04-30T03:25:54.523598867Z" level=info msg="RemoveContainer for \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\" returns successfully" Apr 30 03:25:54.524014 
kubelet[2501]: I0430 03:25:54.523784 2501 scope.go:117] "RemoveContainer" containerID="e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977" Apr 30 03:25:54.525043 containerd[1466]: time="2025-04-30T03:25:54.525013987Z" level=info msg="RemoveContainer for \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\"" Apr 30 03:25:54.528600 containerd[1466]: time="2025-04-30T03:25:54.528530972Z" level=info msg="RemoveContainer for \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\" returns successfully" Apr 30 03:25:54.528711 kubelet[2501]: I0430 03:25:54.528695 2501 scope.go:117] "RemoveContainer" containerID="af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0" Apr 30 03:25:54.528889 containerd[1466]: time="2025-04-30T03:25:54.528848997Z" level=error msg="ContainerStatus for \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\": not found" Apr 30 03:25:54.529065 kubelet[2501]: E0430 03:25:54.529014 2501 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\": not found" containerID="af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0" Apr 30 03:25:54.529141 kubelet[2501]: I0430 03:25:54.529060 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0"} err="failed to get container status \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\": rpc error: code = NotFound desc = an error occurred when try to find container \"af44bad55ad745b041a5482e14250aea960afbc939ee53f36da3f004f8b57ec0\": not found" Apr 30 03:25:54.529141 kubelet[2501]: I0430 
03:25:54.529084 2501 scope.go:117] "RemoveContainer" containerID="67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e" Apr 30 03:25:54.529310 containerd[1466]: time="2025-04-30T03:25:54.529251893Z" level=error msg="ContainerStatus for \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\": not found" Apr 30 03:25:54.529416 kubelet[2501]: E0430 03:25:54.529388 2501 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\": not found" containerID="67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e" Apr 30 03:25:54.529462 kubelet[2501]: I0430 03:25:54.529416 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e"} err="failed to get container status \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"67670762fb1a2e279ba2a715a472f7a376ff19c424c2546354a9260234c25e8e\": not found" Apr 30 03:25:54.529462 kubelet[2501]: I0430 03:25:54.529437 2501 scope.go:117] "RemoveContainer" containerID="6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e" Apr 30 03:25:54.529665 containerd[1466]: time="2025-04-30T03:25:54.529630832Z" level=error msg="ContainerStatus for \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\": not found" Apr 30 03:25:54.529765 kubelet[2501]: E0430 03:25:54.529738 2501 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\": not found" containerID="6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e" Apr 30 03:25:54.529809 kubelet[2501]: I0430 03:25:54.529768 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e"} err="failed to get container status \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"6fc1d65ee494c31bae33baf62903d8c6a279b6277bcff957ad35a13969ab3d0e\": not found" Apr 30 03:25:54.529809 kubelet[2501]: I0430 03:25:54.529786 2501 scope.go:117] "RemoveContainer" containerID="f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d" Apr 30 03:25:54.530020 containerd[1466]: time="2025-04-30T03:25:54.529987590Z" level=error msg="ContainerStatus for \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\": not found" Apr 30 03:25:54.530122 kubelet[2501]: E0430 03:25:54.530100 2501 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\": not found" containerID="f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d" Apr 30 03:25:54.530165 kubelet[2501]: I0430 03:25:54.530124 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d"} err="failed to get container status \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"f83bb9ce0efdb19da509d34dcd446812e7a42351407a0e7a9a983a214f05fc3d\": not found" Apr 30 03:25:54.530165 kubelet[2501]: I0430 03:25:54.530140 2501 scope.go:117] "RemoveContainer" containerID="e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977" Apr 30 03:25:54.530340 containerd[1466]: time="2025-04-30T03:25:54.530306376Z" level=error msg="ContainerStatus for \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\": not found" Apr 30 03:25:54.530461 kubelet[2501]: E0430 03:25:54.530438 2501 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\": not found" containerID="e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977" Apr 30 03:25:54.530519 kubelet[2501]: I0430 03:25:54.530464 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977"} err="failed to get container status \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\": rpc error: code = NotFound desc = an error occurred when try to find container \"e72ec6e7b583794bb1dd688d8a55e0fba633c724da61cfed6d7114a1d9e37977\": not found" Apr 30 03:25:54.953525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d7afcc93770fd8cf41b417715814f4d48adffb8922444053ae6a3932f4822d8-rootfs.mount: Deactivated successfully. Apr 30 03:25:54.953665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46c0e8877e742aa70a64b2f5f73bf1287af992669af76b1f2793e8e2134efaf7-rootfs.mount: Deactivated successfully. 
Apr 30 03:25:54.953744 systemd[1]: var-lib-kubelet-pods-bccd8a08\x2d42f5\x2d478c\x2da2d0\x2d833e44ac5978-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh2bfj.mount: Deactivated successfully. Apr 30 03:25:54.953835 systemd[1]: var-lib-kubelet-pods-354a3d62\x2d5ab7\x2d4b54\x2d8289\x2d98acde9e6e04-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 03:25:54.953935 systemd[1]: var-lib-kubelet-pods-354a3d62\x2d5ab7\x2d4b54\x2d8289\x2d98acde9e6e04-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 03:25:54.954017 systemd[1]: var-lib-kubelet-pods-354a3d62\x2d5ab7\x2d4b54\x2d8289\x2d98acde9e6e04-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d88cwf.mount: Deactivated successfully. Apr 30 03:25:55.229903 kubelet[2501]: I0430 03:25:55.229741 2501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354a3d62-5ab7-4b54-8289-98acde9e6e04" path="/var/lib/kubelet/pods/354a3d62-5ab7-4b54-8289-98acde9e6e04/volumes" Apr 30 03:25:55.230741 kubelet[2501]: I0430 03:25:55.230704 2501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccd8a08-42f5-478c-a2d0-833e44ac5978" path="/var/lib/kubelet/pods/bccd8a08-42f5-478c-a2d0-833e44ac5978/volumes" Apr 30 03:25:55.909214 sshd[4200]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:55.923353 systemd[1]: sshd@26-10.0.0.59:22-10.0.0.1:55310.service: Deactivated successfully. Apr 30 03:25:55.925939 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 03:25:55.927694 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit. Apr 30 03:25:55.936399 systemd[1]: Started sshd@27-10.0.0.59:22-10.0.0.1:55324.service - OpenSSH per-connection server daemon (10.0.0.1:55324). Apr 30 03:25:55.937609 systemd-logind[1446]: Removed session 27. 
Apr 30 03:25:55.977155 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 55324 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:25:55.979187 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:55.984188 systemd-logind[1446]: New session 28 of user core. Apr 30 03:25:55.992079 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 03:25:56.650820 sshd[4365]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:56.664455 systemd[1]: sshd@27-10.0.0.59:22-10.0.0.1:55324.service: Deactivated successfully. Apr 30 03:25:56.667569 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 03:25:56.669983 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit. Apr 30 03:25:56.676522 systemd[1]: Started sshd@28-10.0.0.59:22-10.0.0.1:37132.service - OpenSSH per-connection server daemon (10.0.0.1:37132). Apr 30 03:25:56.678228 systemd-logind[1446]: Removed session 28. Apr 30 03:25:56.712030 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 37132 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k Apr 30 03:25:56.714170 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:25:56.722456 systemd-logind[1446]: New session 29 of user core. Apr 30 03:25:56.726519 kubelet[2501]: I0430 03:25:56.724177 2501 memory_manager.go:355] "RemoveStaleState removing state" podUID="354a3d62-5ab7-4b54-8289-98acde9e6e04" containerName="cilium-agent" Apr 30 03:25:56.726519 kubelet[2501]: I0430 03:25:56.724211 2501 memory_manager.go:355] "RemoveStaleState removing state" podUID="bccd8a08-42f5-478c-a2d0-833e44ac5978" containerName="cilium-operator" Apr 30 03:25:56.732475 systemd[1]: Started session-29.scope - Session 29 of User core. 
Apr 30 03:25:56.748782 systemd[1]: Created slice kubepods-burstable-pod368e9981_24a7_4c00_8376_d7f38d4e5c09.slice - libcontainer container kubepods-burstable-pod368e9981_24a7_4c00_8376_d7f38d4e5c09.slice. Apr 30 03:25:56.797931 sshd[4378]: pam_unix(sshd:session): session closed for user core Apr 30 03:25:56.816689 kubelet[2501]: I0430 03:25:56.816627 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-etc-cni-netd\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.816689 kubelet[2501]: I0430 03:25:56.816671 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/368e9981-24a7-4c00-8376-d7f38d4e5c09-cilium-config-path\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.816689 kubelet[2501]: I0430 03:25:56.816691 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-bpf-maps\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.816908 kubelet[2501]: I0430 03:25:56.816711 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-cilium-cgroup\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.816908 kubelet[2501]: I0430 03:25:56.816732 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/368e9981-24a7-4c00-8376-d7f38d4e5c09-cilium-ipsec-secrets\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.816908 kubelet[2501]: I0430 03:25:56.816747 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-cni-path\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.816908 kubelet[2501]: I0430 03:25:56.816798 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-lib-modules\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.816908 kubelet[2501]: I0430 03:25:56.816814 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-xtables-lock\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.816908 kubelet[2501]: I0430 03:25:56.816829 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/368e9981-24a7-4c00-8376-d7f38d4e5c09-clustermesh-secrets\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.817073 kubelet[2501]: I0430 03:25:56.816845 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt8dr\" (UniqueName: \"kubernetes.io/projected/368e9981-24a7-4c00-8376-d7f38d4e5c09-kube-api-access-jt8dr\") pod \"cilium-dxdtl\" (UID: 
\"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.817073 kubelet[2501]: I0430 03:25:56.816861 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-host-proc-sys-kernel\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.817073 kubelet[2501]: I0430 03:25:56.816881 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-cilium-run\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.817073 kubelet[2501]: I0430 03:25:56.816895 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-hostproc\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.817073 kubelet[2501]: I0430 03:25:56.816911 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/368e9981-24a7-4c00-8376-d7f38d4e5c09-host-proc-sys-net\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.817073 kubelet[2501]: I0430 03:25:56.816952 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/368e9981-24a7-4c00-8376-d7f38d4e5c09-hubble-tls\") pod \"cilium-dxdtl\" (UID: \"368e9981-24a7-4c00-8376-d7f38d4e5c09\") " pod="kube-system/cilium-dxdtl" Apr 30 03:25:56.819017 systemd[1]: 
sshd@28-10.0.0.59:22-10.0.0.1:37132.service: Deactivated successfully.
Apr 30 03:25:56.821908 systemd[1]: session-29.scope: Deactivated successfully.
Apr 30 03:25:56.824221 systemd-logind[1446]: Session 29 logged out. Waiting for processes to exit.
Apr 30 03:25:56.830358 systemd[1]: Started sshd@29-10.0.0.59:22-10.0.0.1:37138.service - OpenSSH per-connection server daemon (10.0.0.1:37138).
Apr 30 03:25:56.831666 systemd-logind[1446]: Removed session 29.
Apr 30 03:25:56.864296 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 37138 ssh2: RSA SHA256:jIf8HFdP0RMm9rerXskb2qP8CDRFmcx9BTiKlIsr66k
Apr 30 03:25:56.866169 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:25:56.870856 systemd-logind[1446]: New session 30 of user core.
Apr 30 03:25:56.882143 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 30 03:25:57.053465 kubelet[2501]: E0430 03:25:57.053404 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:25:57.054156 containerd[1466]: time="2025-04-30T03:25:57.054059221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxdtl,Uid:368e9981-24a7-4c00-8376-d7f38d4e5c09,Namespace:kube-system,Attempt:0,}"
Apr 30 03:25:57.221573 kubelet[2501]: E0430 03:25:57.221505 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:25:57.270375 containerd[1466]: time="2025-04-30T03:25:57.270065163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:25:57.270375 containerd[1466]: time="2025-04-30T03:25:57.270141288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:25:57.270375 containerd[1466]: time="2025-04-30T03:25:57.270166296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:25:57.270375 containerd[1466]: time="2025-04-30T03:25:57.270284241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:25:57.293072 systemd[1]: Started cri-containerd-4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee.scope - libcontainer container 4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee.
Apr 30 03:25:57.316575 containerd[1466]: time="2025-04-30T03:25:57.316250860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dxdtl,Uid:368e9981-24a7-4c00-8376-d7f38d4e5c09,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\""
Apr 30 03:25:57.317119 kubelet[2501]: E0430 03:25:57.317085 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:25:57.318897 containerd[1466]: time="2025-04-30T03:25:57.318860539Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 03:25:57.455061 containerd[1466]: time="2025-04-30T03:25:57.454967738Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b586a488afcb4874a5d63e326c028f8336b62fb9ada27b3e7f1e8e0e044023cf\""
Apr 30 03:25:57.455767 containerd[1466]: time="2025-04-30T03:25:57.455712749Z" level=info msg="StartContainer for \"b586a488afcb4874a5d63e326c028f8336b62fb9ada27b3e7f1e8e0e044023cf\""
Apr 30 03:25:57.488130 systemd[1]: Started cri-containerd-b586a488afcb4874a5d63e326c028f8336b62fb9ada27b3e7f1e8e0e044023cf.scope - libcontainer container b586a488afcb4874a5d63e326c028f8336b62fb9ada27b3e7f1e8e0e044023cf.
Apr 30 03:25:57.529272 systemd[1]: cri-containerd-b586a488afcb4874a5d63e326c028f8336b62fb9ada27b3e7f1e8e0e044023cf.scope: Deactivated successfully.
Apr 30 03:25:57.650228 containerd[1466]: time="2025-04-30T03:25:57.649725214Z" level=info msg="StartContainer for \"b586a488afcb4874a5d63e326c028f8336b62fb9ada27b3e7f1e8e0e044023cf\" returns successfully"
Apr 30 03:25:57.788640 containerd[1466]: time="2025-04-30T03:25:57.788509821Z" level=info msg="shim disconnected" id=b586a488afcb4874a5d63e326c028f8336b62fb9ada27b3e7f1e8e0e044023cf namespace=k8s.io
Apr 30 03:25:57.788640 containerd[1466]: time="2025-04-30T03:25:57.788591076Z" level=warning msg="cleaning up after shim disconnected" id=b586a488afcb4874a5d63e326c028f8336b62fb9ada27b3e7f1e8e0e044023cf namespace=k8s.io
Apr 30 03:25:57.788640 containerd[1466]: time="2025-04-30T03:25:57.788601866Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:25:58.471369 kubelet[2501]: E0430 03:25:58.471308 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:25:58.496560 containerd[1466]: time="2025-04-30T03:25:58.496478122Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 03:25:58.514807 containerd[1466]: time="2025-04-30T03:25:58.514734539Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e\""
Apr 30 03:25:58.515692 containerd[1466]: time="2025-04-30T03:25:58.515640459Z" level=info msg="StartContainer for \"89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e\""
Apr 30 03:25:58.547110 systemd[1]: Started cri-containerd-89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e.scope - libcontainer container 89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e.
Apr 30 03:25:58.576637 containerd[1466]: time="2025-04-30T03:25:58.576463580Z" level=info msg="StartContainer for \"89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e\" returns successfully"
Apr 30 03:25:58.586676 systemd[1]: cri-containerd-89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e.scope: Deactivated successfully.
Apr 30 03:25:58.616879 containerd[1466]: time="2025-04-30T03:25:58.616781731Z" level=info msg="shim disconnected" id=89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e namespace=k8s.io
Apr 30 03:25:58.616879 containerd[1466]: time="2025-04-30T03:25:58.616866593Z" level=warning msg="cleaning up after shim disconnected" id=89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e namespace=k8s.io
Apr 30 03:25:58.616879 containerd[1466]: time="2025-04-30T03:25:58.616878707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:25:58.924134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89d9421c5a83ceb00590a023fa1df5aea473b03b2c3d7cdc46665115a9b5d77e-rootfs.mount: Deactivated successfully.
Apr 30 03:25:59.284302 kubelet[2501]: E0430 03:25:59.284239 2501 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 03:25:59.476034 kubelet[2501]: E0430 03:25:59.475988 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:25:59.478102 containerd[1466]: time="2025-04-30T03:25:59.478030940Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 03:25:59.668602 containerd[1466]: time="2025-04-30T03:25:59.668376783Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7\""
Apr 30 03:25:59.669586 containerd[1466]: time="2025-04-30T03:25:59.669520168Z" level=info msg="StartContainer for \"6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7\""
Apr 30 03:25:59.710162 systemd[1]: Started cri-containerd-6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7.scope - libcontainer container 6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7.
Apr 30 03:25:59.781001 systemd[1]: cri-containerd-6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7.scope: Deactivated successfully.
Apr 30 03:25:59.847002 containerd[1466]: time="2025-04-30T03:25:59.846817063Z" level=info msg="StartContainer for \"6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7\" returns successfully"
Apr 30 03:25:59.924081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7-rootfs.mount: Deactivated successfully.
Apr 30 03:26:00.004856 containerd[1466]: time="2025-04-30T03:26:00.004766353Z" level=info msg="shim disconnected" id=6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7 namespace=k8s.io
Apr 30 03:26:00.004856 containerd[1466]: time="2025-04-30T03:26:00.004831709Z" level=warning msg="cleaning up after shim disconnected" id=6c8cbe1db68fe09c2be38032efe086f52f3cb2e2d96cbf0faf3ddb9408a2b2c7 namespace=k8s.io
Apr 30 03:26:00.004856 containerd[1466]: time="2025-04-30T03:26:00.004842158Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:26:00.479991 kubelet[2501]: E0430 03:26:00.479940 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:26:00.481654 containerd[1466]: time="2025-04-30T03:26:00.481586977Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 03:26:00.614256 containerd[1466]: time="2025-04-30T03:26:00.614170420Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c\""
Apr 30 03:26:00.614890 containerd[1466]: time="2025-04-30T03:26:00.614845002Z" level=info msg="StartContainer for \"d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c\""
Apr 30 03:26:00.653144 systemd[1]: Started cri-containerd-d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c.scope - libcontainer container d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c.
Apr 30 03:26:00.687209 systemd[1]: cri-containerd-d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c.scope: Deactivated successfully.
Apr 30 03:26:00.736653 containerd[1466]: time="2025-04-30T03:26:00.736485622Z" level=info msg="StartContainer for \"d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c\" returns successfully"
Apr 30 03:26:00.761984 containerd[1466]: time="2025-04-30T03:26:00.761896987Z" level=info msg="shim disconnected" id=d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c namespace=k8s.io
Apr 30 03:26:00.761984 containerd[1466]: time="2025-04-30T03:26:00.761973984Z" level=warning msg="cleaning up after shim disconnected" id=d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c namespace=k8s.io
Apr 30 03:26:00.761984 containerd[1466]: time="2025-04-30T03:26:00.761985295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:26:00.924716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c-rootfs.mount: Deactivated successfully.
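Each cilium init container in this log follows the same containerd pattern: CreateContainer returns a 64-hex container id, StartContainer runs it, the systemd scope deactivates, and the shim disconnects. As a sketch, the name/id pairs can be recovered from the `CreateContainer ... returns container id` entries by matching the literal log text; the `PATTERN` helper below is hypothetical tooling, not part of containerd:

```python
import re

# Sketch: extract the container name and returned id from a
# "CreateContainer ... returns container id" log entry like those above.
# The log escapes inner quotes as \" so the pattern matches backslash+quote.
PATTERN = re.compile(
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:(?P<attempt>\d+),\}'
    r' returns container id \\"(?P<cid>[0-9a-f]{64})\\"'
)

entry = (
    'msg="CreateContainer within sandbox '
    '\\"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\\" '
    'for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns '
    'container id '
    '\\"d5d804e20049a6248c2e9b850410a4f9446d4409c8b21ee67f2ced855e85058c\\""'
)
m = PATTERN.search(entry)
print(m.group("name"), m.group("cid")[:12])  # clean-cilium-state d5d804e20049
```

Run over the whole log, this would yield mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and cilium-agent in start order.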
Apr 30 03:26:01.067178 kubelet[2501]: I0430 03:26:01.067084 2501 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T03:26:01Z","lastTransitionTime":"2025-04-30T03:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 03:26:01.483943 kubelet[2501]: E0430 03:26:01.483780 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:26:01.485503 containerd[1466]: time="2025-04-30T03:26:01.485357853Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 03:26:01.546072 containerd[1466]: time="2025-04-30T03:26:01.546021116Z" level=info msg="CreateContainer within sandbox \"4ec47d8526522998e7460e757e5a1232d51f75f09532069ff726e2986ef743ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e32afb535a2bf3c7dea172a0662c2596856d677be45fd618835b15e0a7735e1\""
Apr 30 03:26:01.547214 containerd[1466]: time="2025-04-30T03:26:01.547146432Z" level=info msg="StartContainer for \"3e32afb535a2bf3c7dea172a0662c2596856d677be45fd618835b15e0a7735e1\""
Apr 30 03:26:01.582203 systemd[1]: Started cri-containerd-3e32afb535a2bf3c7dea172a0662c2596856d677be45fd618835b15e0a7735e1.scope - libcontainer container 3e32afb535a2bf3c7dea172a0662c2596856d677be45fd618835b15e0a7735e1.
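The `setters.go:602` entry above embeds the new node condition as a JSON object. Parsing it (values copied verbatim from the log line) shows the same Ready=False / KubeletNotReady state that the uninitialized CNI plugin causes:

```python
import json

# Sketch: the "Node became not ready" entry embeds the node condition as JSON;
# parsing it recovers the status and reason the kubelet reported.
condition = json.loads(
    '{"type":"Ready","status":"False",'
    '"lastHeartbeatTime":"2025-04-30T03:26:01Z",'
    '"lastTransitionTime":"2025-04-30T03:26:01Z",'
    '"reason":"KubeletNotReady",'
    '"message":"container runtime network not ready: NetworkReady=false '
    'reason:NetworkPluginNotReady message:Network plugin returns error: '
    'cni plugin not initialized"}'
)
print(condition["type"], condition["status"], condition["reason"])
# Ready False KubeletNotReady
```

The condition clears once the cilium-agent container starts and the CNI plugin initializes, which is what the subsequent lxc_health link-up entries indicate.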
Apr 30 03:26:01.647806 containerd[1466]: time="2025-04-30T03:26:01.647742129Z" level=info msg="StartContainer for \"3e32afb535a2bf3c7dea172a0662c2596856d677be45fd618835b15e0a7735e1\" returns successfully"
Apr 30 03:26:02.077005 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 03:26:02.489151 kubelet[2501]: E0430 03:26:02.488831 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:26:02.502852 kubelet[2501]: I0430 03:26:02.502762 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dxdtl" podStartSLOduration=6.502744294 podStartE2EDuration="6.502744294s" podCreationTimestamp="2025-04-30 03:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:26:02.501992292 +0000 UTC m=+103.371817246" watchObservedRunningTime="2025-04-30 03:26:02.502744294 +0000 UTC m=+103.372569228"
Apr 30 03:26:03.490294 kubelet[2501]: E0430 03:26:03.490255 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:26:04.492631 kubelet[2501]: E0430 03:26:04.492584 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:26:05.370477 kubelet[2501]: E0430 03:26:05.370296 2501 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:32818->127.0.0.1:36969: write tcp 127.0.0.1:32818->127.0.0.1:36969: write: broken pipe
Apr 30 03:26:05.408350 systemd-networkd[1380]: lxc_health: Link UP
Apr 30 03:26:05.421193 systemd-networkd[1380]: lxc_health: Gained carrier
Apr 30 03:26:06.969435 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Apr 30 03:26:07.055952 kubelet[2501]: E0430 03:26:07.055611 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:26:07.499353 kubelet[2501]: E0430 03:26:07.499287 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:26:08.501933 kubelet[2501]: E0430 03:26:08.501870 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 03:26:12.239172 sshd[4386]: pam_unix(sshd:session): session closed for user core
Apr 30 03:26:12.244546 systemd[1]: sshd@29-10.0.0.59:22-10.0.0.1:37138.service: Deactivated successfully.
Apr 30 03:26:12.247218 systemd[1]: session-30.scope: Deactivated successfully.
Apr 30 03:26:12.247941 systemd-logind[1446]: Session 30 logged out. Waiting for processes to exit.
Apr 30 03:26:12.248878 systemd-logind[1446]: Removed session 30.
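The `pod_startup_latency_tracker.go:104` entry reports podStartSLOduration=6.502744294 for cilium-dxdtl, which matches watchObservedRunningTime (03:26:02.502744294) minus podCreationTimestamp (03:25:56). A sketch of that arithmetic, with the timestamps truncated to microseconds because `%f` parses at most six fractional digits:

```python
from datetime import datetime

# Sketch: reproduce the reported podStartSLOduration from the two timestamps
# in the tracker's log entry (truncated from nanoseconds to microseconds).
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime.strptime("2025-04-30 03:25:56.000000 +0000", fmt)
observed = datetime.strptime("2025-04-30 03:26:02.502744 +0000", fmt)
startup = (observed - created).total_seconds()
print(startup)  # 6.502744 -- the log's 6.502744294s at microsecond precision
```

The zero-valued firstStartedPulling/lastFinishedPulling timestamps ("0001-01-01 00:00:00") indicate no image pull was observed for this pod, so the whole duration is container creation and startup.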