Apr 30 00:20:17.909734 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:31:30 -00 2025
Apr 30 00:20:17.909762 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:20:17.909777 kernel: BIOS-provided physical RAM map:
Apr 30 00:20:17.909786 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 00:20:17.909795 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 00:20:17.909803 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 00:20:17.909814 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 30 00:20:17.909823 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 30 00:20:17.909832 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 30 00:20:17.909844 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 30 00:20:17.909853 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 00:20:17.909862 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 00:20:17.909876 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 00:20:17.909885 kernel: NX (Execute Disable) protection: active
Apr 30 00:20:17.909896 kernel: APIC: Static calls initialized
Apr 30 00:20:17.909914 kernel: SMBIOS 2.8 present.
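The command line above mixes bare flags with key=value parameters (including the dm-verity root hash the initrd later uses for /dev/mapper/usr). A minimal sketch of splitting such a line into a dict, assuming whitespace-separated tokens and ignoring the kernel's quoting rules:

```python
# Minimal sketch: split a kernel command line like the one logged above
# into bare flags and key=value parameters. This is an illustration, not
# the kernel's own parser (which also handles quoted values).
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected"
)

flags, params = [], {}
for token in cmdline.split():
    key, sep, value = token.partition("=")
    if sep:
        params[key] = value      # e.g. params["root"] == "LABEL=ROOT"
    else:
        flags.append(token)      # bare switches with no value

print(params["root"], params["console"])   # LABEL=ROOT ttyS0,115200
```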
Apr 30 00:20:17.909924 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 30 00:20:17.909934 kernel: Hypervisor detected: KVM
Apr 30 00:20:17.909943 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 00:20:17.909953 kernel: kvm-clock: using sched offset of 3092825211 cycles
Apr 30 00:20:17.909964 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 00:20:17.909974 kernel: tsc: Detected 2794.748 MHz processor
Apr 30 00:20:17.909984 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 00:20:17.909994 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 00:20:17.910007 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Apr 30 00:20:17.910038 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 00:20:17.910049 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 00:20:17.910059 kernel: Using GB pages for direct mapping
Apr 30 00:20:17.910069 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:20:17.910079 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 30 00:20:17.910088 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:20:17.910098 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:20:17.910108 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:20:17.910122 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 30 00:20:17.910133 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:20:17.910142 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:20:17.910152 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:20:17.910163 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:20:17.910173 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Apr 30 00:20:17.910183 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Apr 30 00:20:17.910198 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 30 00:20:17.910212 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Apr 30 00:20:17.910222 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Apr 30 00:20:17.910243 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Apr 30 00:20:17.910253 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Apr 30 00:20:17.910266 kernel: No NUMA configuration found
Apr 30 00:20:17.910277 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 30 00:20:17.910290 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 30 00:20:17.910301 kernel: Zone ranges:
Apr 30 00:20:17.910312 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 00:20:17.910322 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 30 00:20:17.910332 kernel: Normal empty
Apr 30 00:20:17.910342 kernel: Movable zone start for each node
Apr 30 00:20:17.910351 kernel: Early memory node ranges
Apr 30 00:20:17.910362 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 00:20:17.910371 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 30 00:20:17.910382 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 30 00:20:17.910396 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 00:20:17.910410 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 00:20:17.910421 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 30 00:20:17.910432 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 00:20:17.910442 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 00:20:17.910453 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 00:20:17.910463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 00:20:17.910474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 00:20:17.910484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 00:20:17.910498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 00:20:17.910509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 00:20:17.910519 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 00:20:17.910530 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 00:20:17.910541 kernel: TSC deadline timer available
Apr 30 00:20:17.910551 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 30 00:20:17.910562 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 00:20:17.910573 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 30 00:20:17.910586 kernel: kvm-guest: setup PV sched yield
Apr 30 00:20:17.910600 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 30 00:20:17.910610 kernel: Booting paravirtualized kernel on KVM
Apr 30 00:20:17.910621 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 00:20:17.910632 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 30 00:20:17.910642 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Apr 30 00:20:17.910653 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Apr 30 00:20:17.910664 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 30 00:20:17.910674 kernel: kvm-guest: PV spinlocks enabled
Apr 30 00:20:17.910685 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 00:20:17.910700 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:20:17.910711 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:20:17.910722 kernel: random: crng init done
Apr 30 00:20:17.910732 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:20:17.910742 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:20:17.910753 kernel: Fallback order for Node 0: 0
Apr 30 00:20:17.910764 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 30 00:20:17.910774 kernel: Policy zone: DMA32
Apr 30 00:20:17.910788 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:20:17.910799 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42992K init, 2200K bss, 136900K reserved, 0K cma-reserved)
Apr 30 00:20:17.910809 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 00:20:17.910820 kernel: ftrace: allocating 37946 entries in 149 pages
Apr 30 00:20:17.910830 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 00:20:17.910840 kernel: Dynamic Preempt: voluntary
Apr 30 00:20:17.910851 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:20:17.910862 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:20:17.910873 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 00:20:17.910887 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:20:17.910898 kernel: Rude variant of Tasks RCU enabled.
Apr 30 00:20:17.910908 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:20:17.910918 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:20:17.910932 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 00:20:17.910943 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 30 00:20:17.910953 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:20:17.910963 kernel: Console: colour VGA+ 80x25
Apr 30 00:20:17.910974 kernel: printk: console [ttyS0] enabled
Apr 30 00:20:17.910988 kernel: ACPI: Core revision 20230628
Apr 30 00:20:17.910998 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 00:20:17.911009 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 00:20:17.911034 kernel: x2apic enabled
Apr 30 00:20:17.911045 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 00:20:17.911056 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 30 00:20:17.911067 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 30 00:20:17.911077 kernel: kvm-guest: setup PV IPIs
Apr 30 00:20:17.911106 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 00:20:17.911118 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 00:20:17.911129 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 30 00:20:17.911140 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 00:20:17.911154 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 00:20:17.911165 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 00:20:17.911176 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 00:20:17.911187 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 00:20:17.911198 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 00:20:17.911213 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 00:20:17.911224 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 00:20:17.911248 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 00:20:17.911259 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 00:20:17.911270 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 00:20:17.911281 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 30 00:20:17.911293 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 30 00:20:17.911305 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 30 00:20:17.911319 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 00:20:17.911330 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 00:20:17.911341 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 00:20:17.911353 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 00:20:17.911364 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 00:20:17.911375 kernel: Freeing SMP alternatives memory: 32K
Apr 30 00:20:17.911386 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:20:17.911397 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:20:17.911408 kernel: landlock: Up and running.
Apr 30 00:20:17.911422 kernel: SELinux: Initializing.
Apr 30 00:20:17.911434 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:20:17.911445 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:20:17.911456 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 00:20:17.911468 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:20:17.911479 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:20:17.911490 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:20:17.911501 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 00:20:17.911515 kernel: ... version: 0
Apr 30 00:20:17.911530 kernel: ... bit width: 48
Apr 30 00:20:17.911541 kernel: ... generic registers: 6
Apr 30 00:20:17.911552 kernel: ... value mask: 0000ffffffffffff
Apr 30 00:20:17.911563 kernel: ... max period: 00007fffffffffff
Apr 30 00:20:17.911575 kernel: ... fixed-purpose events: 0
Apr 30 00:20:17.911586 kernel: ... event mask: 000000000000003f
Apr 30 00:20:17.911597 kernel: signal: max sigframe size: 1776
Apr 30 00:20:17.911608 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:20:17.911619 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:20:17.911634 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:20:17.911645 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 00:20:17.911656 kernel: .... node #0, CPUs: #1 #2 #3
Apr 30 00:20:17.911667 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 00:20:17.911678 kernel: smpboot: Max logical packages: 1
Apr 30 00:20:17.911689 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 30 00:20:17.911700 kernel: devtmpfs: initialized
Apr 30 00:20:17.911711 kernel: x86/mm: Memory block size: 128MB
Apr 30 00:20:17.911722 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:20:17.911736 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 00:20:17.911747 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:20:17.911758 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:20:17.911769 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:20:17.911779 kernel: audit: type=2000 audit(1745972417.357:1): state=initialized audit_enabled=0 res=1
Apr 30 00:20:17.911790 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:20:17.911801 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 00:20:17.911812 kernel: cpuidle: using governor menu
Apr 30 00:20:17.911823 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:20:17.911837 kernel: dca service started, version 1.12.1
Apr 30 00:20:17.911848 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 30 00:20:17.911859 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 30 00:20:17.911870 kernel: PCI: Using configuration type 1 for base access
Apr 30 00:20:17.911881 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
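The calibration figures above are internally consistent: with lpj=2794748 from the delay-loop line, the classic BogoMIPS formula reproduces both the per-CPU value and the 4-CPU smpboot total. A worked check, assuming HZ=1000 for this kernel (the CONFIG_HZ value is an assumption):

```python
# Worked check of the BogoMIPS numbers in the log. The formula
# bogomips = lpj * HZ / 500000 is the classic definition; HZ=1000
# is an assumption about this kernel's CONFIG_HZ setting.
HZ = 1000
lpj = 2794748                       # loops_per_jiffy from the calibration line
per_cpu = lpj * HZ / 500000
print(f"{per_cpu:.2f}")             # 5589.50 (the log truncates to 5589.49)
print(f"{4 * per_cpu:.2f}")         # 22357.98, matching the smpboot total
```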
Apr 30 00:20:17.911892 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:20:17.911903 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:20:17.911914 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:20:17.911925 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:20:17.911939 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:20:17.911950 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:20:17.911961 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:20:17.911972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:20:17.911983 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:20:17.911994 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 00:20:17.912004 kernel: ACPI: Interpreter enabled
Apr 30 00:20:17.912028 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 30 00:20:17.912041 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 00:20:17.912056 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 00:20:17.912067 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 00:20:17.912078 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 00:20:17.912089 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:20:17.912367 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:20:17.912544 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 00:20:17.912713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 00:20:17.912735 kernel: PCI host bridge to bus 0000:00
Apr 30 00:20:17.912920 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 00:20:17.913096 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 00:20:17.913262 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 00:20:17.913416 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 30 00:20:17.913567 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 30 00:20:17.913716 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 30 00:20:17.913873 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:20:17.914196 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 00:20:17.914400 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 30 00:20:17.914567 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 30 00:20:17.914728 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 30 00:20:17.914889 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 30 00:20:17.915071 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 00:20:17.915275 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 00:20:17.915440 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 30 00:20:17.915606 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 30 00:20:17.915784 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 30 00:20:17.915970 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 30 00:20:17.916149 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 30 00:20:17.916319 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 30 00:20:17.916487 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 30 00:20:17.916661 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 00:20:17.916818 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 30 00:20:17.916973 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 30 00:20:17.917155 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 30 00:20:17.917322 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 30 00:20:17.917498 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 00:20:17.917662 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 00:20:17.917836 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 00:20:17.917992 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 30 00:20:17.918166 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 30 00:20:17.918345 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 00:20:17.918475 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 30 00:20:17.918491 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 00:20:17.918499 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 00:20:17.918507 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 00:20:17.918514 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 00:20:17.918522 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 00:20:17.918529 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 00:20:17.918537 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 00:20:17.918544 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 00:20:17.918552 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 00:20:17.918562 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 00:20:17.918570 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 00:20:17.918577 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 00:20:17.918585 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 00:20:17.918593 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 00:20:17.918600 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 00:20:17.918607 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 00:20:17.918615 kernel: iommu: Default domain type: Translated
Apr 30 00:20:17.918623 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 00:20:17.918633 kernel: PCI: Using ACPI for IRQ routing
Apr 30 00:20:17.918640 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 00:20:17.918648 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 00:20:17.918655 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 30 00:20:17.918783 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 00:20:17.918910 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 00:20:17.919085 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 00:20:17.919096 kernel: vgaarb: loaded
Apr 30 00:20:17.919108 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 00:20:17.919116 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 00:20:17.919124 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 00:20:17.919131 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:20:17.919139 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:20:17.919147 kernel: pnp: PnP ACPI init
Apr 30 00:20:17.919341 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 30 00:20:17.919358 kernel: pnp: PnP ACPI: found 6 devices
Apr 30 00:20:17.919371 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 00:20:17.919379 kernel: NET: Registered PF_INET protocol family
Apr 30 00:20:17.919386 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:20:17.919394 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:20:17.919402 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:20:17.919409 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:20:17.919417 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:20:17.919424 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:20:17.919432 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:20:17.919442 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:20:17.919451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:20:17.919459 kernel: NET: Registered PF_XDP protocol family
Apr 30 00:20:17.919590 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 00:20:17.919707 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 00:20:17.919821 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 00:20:17.919936 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 30 00:20:17.920067 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 00:20:17.920216 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 30 00:20:17.920242 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:20:17.920251 kernel: Initialise system trusted keyrings
Apr 30 00:20:17.920259 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:20:17.920267 kernel: Key type asymmetric registered
Apr 30 00:20:17.920274 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:20:17.920282 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 00:20:17.920289 kernel: io scheduler mq-deadline registered
Apr 30 00:20:17.920297 kernel: io scheduler kyber registered
Apr 30 00:20:17.920304 kernel: io scheduler bfq registered
Apr 30 00:20:17.920315 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 00:20:17.920323 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 00:20:17.920331 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 00:20:17.920339 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 30 00:20:17.920346 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:20:17.920354 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 00:20:17.920362 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 00:20:17.920369 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 00:20:17.920377 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 00:20:17.920534 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 30 00:20:17.920546 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 00:20:17.920664 kernel: rtc_cmos 00:04: registered as rtc0
Apr 30 00:20:17.920783 kernel: rtc_cmos 00:04: setting system clock to 2025-04-30T00:20:17 UTC (1745972417)
Apr 30 00:20:17.920904 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 30 00:20:17.920914 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 00:20:17.920922 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:20:17.920933 kernel: Segment Routing with IPv6
Apr 30 00:20:17.920941 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:20:17.920949 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:20:17.920956 kernel: Key type dns_resolver registered
Apr 30 00:20:17.920964 kernel: IPI shorthand broadcast: enabled
Apr 30 00:20:17.920971 kernel: sched_clock: Marking stable (763002223, 108678258)->(933479287, -61798806)
Apr 30 00:20:17.920979 kernel: registered taskstats version 1
Apr 30 00:20:17.920987 kernel: Loading compiled-in X.509 certificates
Apr 30 00:20:17.920994 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: eb8928891d93dabd1aa89590482110d196038597'
Apr 30 00:20:17.921005 kernel: Key type .fscrypt registered
Apr 30 00:20:17.921012 kernel: Key type fscrypt-provisioning registered
Apr 30 00:20:17.921091 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:20:17.921099 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:20:17.921107 kernel: ima: No architecture policies found
Apr 30 00:20:17.921114 kernel: clk: Disabling unused clocks
Apr 30 00:20:17.921122 kernel: Freeing unused kernel image (initmem) memory: 42992K
Apr 30 00:20:17.921130 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 00:20:17.921137 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Apr 30 00:20:17.921148 kernel: Run /init as init process
Apr 30 00:20:17.921156 kernel: with arguments:
Apr 30 00:20:17.921163 kernel: /init
Apr 30 00:20:17.921170 kernel: with environment:
Apr 30 00:20:17.921178 kernel: HOME=/
Apr 30 00:20:17.921185 kernel: TERM=linux
Apr 30 00:20:17.921193 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:20:17.921202 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:20:17.921214 systemd[1]: Detected virtualization kvm.
Apr 30 00:20:17.921223 systemd[1]: Detected architecture x86-64.
Apr 30 00:20:17.921239 systemd[1]: Running in initrd.
Apr 30 00:20:17.921247 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:20:17.921255 systemd[1]: Hostname set to .
Apr 30 00:20:17.921263 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:20:17.921271 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:20:17.921279 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:20:17.921290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:20:17.921299 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:20:17.921320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
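The \x2d sequences in the device unit names above are systemd's path escaping: '/' becomes '-', so a literal '-' inside a path component must be hex-escaped. A rough sketch of the transformation (an approximation of `systemd-escape --path`, ignoring corner cases such as a leading dot):

```python
# Approximate sketch of systemd's path-to-unit-name escaping, which is
# why /dev/disk/by-label/EFI-SYSTEM appears in the log as
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Not the reference
# implementation; corner cases (leading '.', empty path) are ignored.
def escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separators become dashes
        elif ch.isalnum() or ch in "_.:":
            out.append(ch)                   # safe characters pass through
        else:
            out.append(f"\\x{ord(ch):02x}")  # everything else is hex-escaped
    return "".join(out)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```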
Apr 30 00:20:17.921331 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:20:17.921340 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:20:17.921352 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:20:17.921361 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:20:17.921369 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:20:17.921378 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:20:17.921386 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:20:17.921394 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:20:17.921402 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:20:17.921410 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:20:17.921421 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:20:17.921430 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:20:17.921438 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:20:17.921446 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:20:17.921455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:20:17.921463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:20:17.921471 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:20:17.921480 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:20:17.921488 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:20:17.921499 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:20:17.921507 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:20:17.921515 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:20:17.921524 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:20:17.921532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:20:17.921540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:20:17.921548 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:20:17.921557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:20:17.921568 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:20:17.921576 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:20:17.921585 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:20:17.921615 systemd-journald[192]: Collecting audit messages is disabled.
Apr 30 00:20:17.921635 systemd-journald[192]: Journal started
Apr 30 00:20:17.921656 systemd-journald[192]: Runtime Journal (/run/log/journal/924cfad8298e4d32b5a947bdcbbfbf12) is 6.0M, max 48.4M, 42.3M free.
Apr 30 00:20:17.903568 systemd-modules-load[195]: Inserted module 'overlay'
Apr 30 00:20:17.946670 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:20:17.946689 kernel: Bridge firewalling registered
Apr 30 00:20:17.946700 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:20:17.933998 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 30 00:20:17.942943 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:20:17.943558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:17.953316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:20:17.955614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:20:17.956857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:20:17.961211 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:20:17.978743 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:20:17.979586 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:20:17.983133 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:20:17.989190 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:20:17.995371 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:20:17.997366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:20:18.008436 dracut-cmdline[226]: dracut-dracut-053
Apr 30 00:20:18.013324 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:20:18.037531 systemd-resolved[230]: Positive Trust Anchors:
Apr 30 00:20:18.037553 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:20:18.037589 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:20:18.040370 systemd-resolved[230]: Defaulting to hostname 'linux'.
Apr 30 00:20:18.041649 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:20:18.048479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:20:18.144056 kernel: SCSI subsystem initialized
Apr 30 00:20:18.154058 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:20:18.169070 kernel: iscsi: registered transport (tcp)
Apr 30 00:20:18.198089 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:20:18.198214 kernel: QLogic iSCSI HBA Driver
Apr 30 00:20:18.257592 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
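The positive trust anchor systemd-resolved logs is the root zone's DS record; its numeric fields are the DNSSEC key tag, algorithm, and digest type. A short annotation of the record as data (field meanings per RFC 4034):

```python
# The DS record logged by systemd-resolved, broken into its fields.
# The comments map the registered numeric codes to their names.
ds = {
    "owner": ".",        # the root zone
    "key_tag": 20326,    # identifies the root key-signing key
    "algorithm": 8,      # RSA/SHA-256
    "digest_type": 2,    # SHA-256
    "digest": "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
}
print(f". IN DS {ds['key_tag']} {ds['algorithm']} {ds['digest_type']} {ds['digest']}")
```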
Apr 30 00:20:18.271229 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:20:18.303169 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:20:18.303251 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:20:18.304401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:20:18.352081 kernel: raid6: avx2x4 gen() 18856 MB/s
Apr 30 00:20:18.369076 kernel: raid6: avx2x2 gen() 18068 MB/s
Apr 30 00:20:18.386377 kernel: raid6: avx2x1 gen() 18212 MB/s
Apr 30 00:20:18.386469 kernel: raid6: using algorithm avx2x4 gen() 18856 MB/s
Apr 30 00:20:18.404437 kernel: raid6: .... xor() 6299 MB/s, rmw enabled
Apr 30 00:20:18.404536 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 00:20:18.427079 kernel: xor: automatically using best checksumming function avx
Apr 30 00:20:18.613081 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:20:18.628466 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:20:18.639319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:20:18.655237 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Apr 30 00:20:18.661359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:20:18.669184 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:20:18.686643 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Apr 30 00:20:18.728914 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:20:18.737301 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:20:18.805860 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:20:18.814211 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:20:18.827669 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:20:18.831453 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:20:18.832975 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:20:18.834474 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:20:18.846185 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:20:18.854071 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 30 00:20:18.898516 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 00:20:18.898542 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 30 00:20:18.898729 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:20:18.898754 kernel: GPT:9289727 != 19775487
Apr 30 00:20:18.898768 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:20:18.898782 kernel: GPT:9289727 != 19775487
Apr 30 00:20:18.898795 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:20:18.898809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:20:18.898824 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 00:20:18.898838 kernel: AES CTR mode by8 optimization enabled
Apr 30 00:20:18.898852 kernel: libata version 3.00 loaded.
Apr 30 00:20:18.857221 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:20:18.876666 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
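The GPT complaints above are plain arithmetic: the backup GPT header must occupy the disk's last LBA, and the logged mismatch is what you see when an image built for a smaller disk is copied or grown onto a larger one (disk-uuid rewrites the headers shortly afterwards). A worked check using the figures from the virtio_blk and GPT lines:

```python
# Worked check of the "GPT:9289727 != 19775487" warning. The backup GPT
# header must sit in the last LBA of the disk; the mismatch means the
# headers still describe the smaller disk the image was built for.
SECTOR = 512
total_sectors = 19775488                 # virtio_blk: [vda] 19775488 512-byte blocks
expected_backup_lba = total_sectors - 1  # where the backup header should be
found_backup_lba = 9289727               # where the stale header actually is

print(expected_backup_lba)                    # 19775487
print((found_backup_lba + 1) * SECTOR / 1e9)  # ~4.76 GB: the original image size
```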
Apr 30 00:20:18.902379 kernel: ahci 0000:00:1f.2: version 3.0
Apr 30 00:20:18.928094 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 30 00:20:18.928135 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 30 00:20:18.928392 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 30 00:20:18.928602 kernel: scsi host0: ahci
Apr 30 00:20:18.928830 kernel: scsi host1: ahci
Apr 30 00:20:18.929101 kernel: scsi host2: ahci
Apr 30 00:20:18.929329 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (471)
Apr 30 00:20:18.929356 kernel: scsi host3: ahci
Apr 30 00:20:18.929572 kernel: scsi host4: ahci
Apr 30 00:20:18.929794 kernel: scsi host5: ahci
Apr 30 00:20:18.930015 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 30 00:20:18.930052 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 30 00:20:18.930069 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 30 00:20:18.930084 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 30 00:20:18.930098 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 30 00:20:18.930119 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 30 00:20:18.876794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:20:18.974797 kernel: BTRFS: device fsid 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (465)
Apr 30 00:20:18.878874 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:20:18.880380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:20:18.880585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:18.886118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:20:18.894535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:20:18.933316 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 00:20:18.978576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:18.987866 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 00:20:19.005138 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 00:20:19.006737 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 00:20:19.016919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:20:19.036349 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:20:19.038901 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:20:19.048582 disk-uuid[556]: Primary Header is updated.
Apr 30 00:20:19.048582 disk-uuid[556]: Secondary Entries is updated.
Apr 30 00:20:19.048582 disk-uuid[556]: Secondary Header is updated.
Apr 30 00:20:19.052649 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:20:19.058091 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:20:19.066477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:20:19.239393 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 30 00:20:19.239503 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 30 00:20:19.239520 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 30 00:20:19.241046 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 30 00:20:19.242047 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 30 00:20:19.242074 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 30 00:20:19.243061 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 30 00:20:19.244366 kernel: ata3.00: applying bridge limits
Apr 30 00:20:19.244381 kernel: ata3.00: configured for UDMA/100
Apr 30 00:20:19.245043 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 00:20:19.294050 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 30 00:20:19.311770 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:20:19.311790 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 30 00:20:20.079062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:20:20.079507 disk-uuid[557]: The operation has completed successfully.
Apr 30 00:20:20.110628 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:20:20.110783 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:20:20.137325 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:20:20.142409 sh[592]: Success
Apr 30 00:20:20.156094 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 30 00:20:20.193845 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:20:20.205778 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:20:20.209158 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:20:20.224650 kernel: BTRFS info (device dm-0): first mount of filesystem 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f
Apr 30 00:20:20.224685 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:20:20.224700 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:20:20.225755 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:20:20.227281 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:20:20.232235 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:20:20.233279 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:20:20.244334 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:20:20.246495 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:20:20.256483 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:20.256526 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:20:20.256544 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:20:20.260057 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:20:20.270739 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:20:20.272859 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:20.283878 systemd[1]: Finished ignition-setup.service - Ignition (setup).
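verity-setup.service assembles the read-only /dev/mapper/usr device from the verity.usr= partition using the verity.usrhash= root hash seen on the kernel command line. A hedged sketch of the roughly equivalent manual step (`veritysetup open` is cryptsetup's real verb, but the device paths here are illustrative, and Flatcar's actual unit additionally derives hash-area offsets that this sketch omits):

```python
# Hedged sketch of what verity-setup does for /dev/mapper/usr: open a
# dm-verity mapping whose root hash is the verity.usrhash= value from
# the kernel command line. Paths are illustrative; the real unit also
# passes options (e.g. the hash-area offset) that are omitted here.
import subprocess

root_hash = "079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc"
part = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"

subprocess.run(
    ["veritysetup", "open", part, "usr", part, root_hash],
    check=True,
)  # on success, the verified device appears as /dev/mapper/usr
```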
Apr 30 00:20:20.292201 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:20:20.446859 ignition[682]: Ignition 2.20.0
Apr 30 00:20:20.446870 ignition[682]: Stage: fetch-offline
Apr 30 00:20:20.446911 ignition[682]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:20.446921 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:20:20.447053 ignition[682]: parsed url from cmdline: ""
Apr 30 00:20:20.447057 ignition[682]: no config URL provided
Apr 30 00:20:20.447062 ignition[682]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:20:20.447073 ignition[682]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:20:20.447103 ignition[682]: op(1): [started] loading QEMU firmware config module
Apr 30 00:20:20.447108 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 30 00:20:20.462874 ignition[682]: op(1): [finished] loading QEMU firmware config module
Apr 30 00:20:20.464597 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:20:20.475207 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:20:20.502532 systemd-networkd[780]: lo: Link UP
Apr 30 00:20:20.502543 systemd-networkd[780]: lo: Gained carrier
Apr 30 00:20:20.504307 systemd-networkd[780]: Enumeration completed
Apr 30 00:20:20.504705 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:20:20.504709 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:20:20.504921 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:20:20.506646 systemd-networkd[780]: eth0: Link UP
Apr 30 00:20:20.506651 systemd-networkd[780]: eth0: Gained carrier
Apr 30 00:20:20.506660 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:20:20.506986 systemd[1]: Reached target network.target - Network.
Apr 30 00:20:20.525199 ignition[682]: parsing config with SHA512: 6bfbe2f1e07bfed33dc14055e1611e5586e5d5138579a3d18f8b81e3f4dc853780611efa5e837e7b0d23528f7413eff89d922c438fcd8a7e484f33b4ba663220
Apr 30 00:20:20.526103 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:20:20.531621 unknown[682]: fetched base config from "system"
Apr 30 00:20:20.531632 unknown[682]: fetched user config from "qemu"
Apr 30 00:20:20.532068 ignition[682]: fetch-offline: fetch-offline passed
Apr 30 00:20:20.532149 ignition[682]: Ignition finished successfully
Apr 30 00:20:20.537434 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:20:20.538073 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 30 00:20:20.550170 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
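The SHA512 value Ignition logs while parsing is simply a digest of the raw config bytes it fetched (here delivered over the QEMU fw_cfg interface whose module it loaded above). A minimal way to reproduce such a digest, assuming the config has been saved to a local file; the path is illustrative:

```python
# Minimal sketch: reproduce the "parsing config with SHA512: ..." digest
# by hashing the raw config bytes. The file path is an assumption for
# illustration; in this boot the config actually arrived via fw_cfg.
import hashlib

with open("config.ign", "rb") as f:   # hypothetical local copy of the config
    print(hashlib.sha512(f.read()).hexdigest())
```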
Apr 30 00:20:20.565073 ignition[784]: Ignition 2.20.0
Apr 30 00:20:20.565086 ignition[784]: Stage: kargs
Apr 30 00:20:20.565293 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:20.565308 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:20:20.569549 ignition[784]: kargs: kargs passed
Apr 30 00:20:20.569610 ignition[784]: Ignition finished successfully
Apr 30 00:20:20.574003 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:20:20.590174 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:20:20.628561 ignition[792]: Ignition 2.20.0
Apr 30 00:20:20.628574 ignition[792]: Stage: disks
Apr 30 00:20:20.628769 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:20.628784 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:20:20.633001 ignition[792]: disks: disks passed
Apr 30 00:20:20.633081 ignition[792]: Ignition finished successfully
Apr 30 00:20:20.636656 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:20:20.637513 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:20:20.639356 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:20:20.641991 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:20:20.644485 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:20:20.646579 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:20:20.660202 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:20:20.713722 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:20:20.723358 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:20:20.742156 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:20:20.842052 kernel: EXT4-fs (vda9): mounted filesystem 21480c83-ef05-4682-ad3b-f751980943a0 r/w with ordered data mode. Quota mode: none.
Apr 30 00:20:20.842666 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:20:20.843893 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:20:20.852124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:20:20.854262 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:20:20.854975 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 00:20:20.855014 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:20:20.863666 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811)
Apr 30 00:20:20.855055 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:20:20.867914 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:20.867935 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:20:20.867948 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:20:20.870054 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:20:20.871924 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:20:20.895153 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:20:20.897289 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:20:20.937302 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:20:20.942944 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:20:20.948766 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:20:20.953089 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:20:21.045451 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:20:21.054165 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:20:21.055392 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:20:21.066052 kernel: BTRFS info (device vda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:21.081828 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:20:21.163337 ignition[926]: INFO : Ignition 2.20.0
Apr 30 00:20:21.163337 ignition[926]: INFO : Stage: mount
Apr 30 00:20:21.165607 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:21.165607 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:20:21.165607 ignition[926]: INFO : mount: mount passed
Apr 30 00:20:21.165607 ignition[926]: INFO : Ignition finished successfully
Apr 30 00:20:21.166869 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:20:21.178116 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:20:21.223690 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:20:21.241178 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:20:21.248041 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Apr 30 00:20:21.248069 kernel: BTRFS info (device vda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:20:21.249398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:20:21.249417 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:20:21.258056 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:20:21.259529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
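The files stage that follows pulls its writes from a storage.files section of the rendered Ignition config. An illustrative reconstruction of the entries behind the helm and cilium downloads logged below, built as a Python dict (this is not the actual config from this boot, and the spec version is an assumption):

```python
# Illustrative reconstruction of the kind of Ignition "storage.files"
# entries behind the op(3)/op(4) GETs in the files stage below. Not the
# real config from this boot; the spec version is an assumption.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                },
            },
            {
                "path": "/opt/bin/cilium.tar.gz",
                "contents": {
                    "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"
                },
            },
        ]
    },
}
print(json.dumps(config, indent=2))
```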
Apr 30 00:20:21.296006 ignition[956]: INFO : Ignition 2.20.0
Apr 30 00:20:21.296006 ignition[956]: INFO : Stage: files
Apr 30 00:20:21.297966 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:21.297966 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:20:21.297966 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:20:21.297966 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:20:21.297966 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:20:21.305727 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:20:21.307182 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:20:21.308767 unknown[956]: wrote ssh authorized keys file for user: core
Apr 30 00:20:21.310044 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:20:21.311493 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:20:21.311493 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 00:20:21.353574 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:20:21.468373 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:20:21.468373 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:20:21.474818 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 00:20:21.624289 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 30 00:20:21.982909 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:20:22.114658 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:20:22.114658 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 00:20:22.118979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Apr 30 00:20:22.407254 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:20:22.889398 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Apr 30 00:20:22.889398 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 00:20:22.893569 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:20:22.918113 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:20:22.918113 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 00:20:22.918113 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 30 00:20:22.918113 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:20:22.918113 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:20:22.918113 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 00:20:22.918113 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:20:23.016295 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:20:23.029171 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:20:23.030854 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:20:23.030854 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:20:23.033684 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:20:23.035170 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:20:23.036957 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:20:23.038697 ignition[956]: INFO : files: files passed
Apr 30 00:20:23.039456 ignition[956]: INFO : Ignition finished successfully
Apr 30 00:20:23.059543 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:20:23.079286 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:20:23.083203 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:20:23.085894 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:20:23.087112 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:20:23.094175 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 00:20:23.098205 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:20:23.098205 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:20:23.101630 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:20:23.103769 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:20:23.104959 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:20:23.118225 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:20:23.147559 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:20:23.147710 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:20:23.148678 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:20:23.151674 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:20:23.153876 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:20:23.154995 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:20:23.183054 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:20:23.212262 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:20:23.222482 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:20:23.222846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:20:23.225652 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:20:23.226003 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:20:23.226156 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:20:23.226887 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:20:23.227439 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:20:23.227773 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:20:23.228116 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:20:23.238661 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:20:23.241047 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:20:23.241638 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:20:23.241964 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:20:23.242472 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:20:23.242785 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:20:23.243106 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:20:23.243249 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:20:23.243974 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:20:23.255163 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:20:23.255727 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:20:23.279109 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:20:23.279825 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:20:23.279974 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:20:23.285582 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:20:23.285762 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:20:23.287918 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:20:23.288376 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:20:23.293092 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:20:23.295939 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:20:23.298131 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:20:23.300219 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:20:23.301264 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:20:23.303477 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:20:23.304534 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:20:23.306855 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:20:23.308245 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:20:23.311080 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:20:23.312230 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:20:23.326233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:20:23.328324 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:20:23.329546 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:20:23.333256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:20:23.335427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:20:23.336763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:20:23.339712 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:20:23.341211 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:20:23.343107 ignition[1012]: INFO : Ignition 2.20.0
Apr 30 00:20:23.343107 ignition[1012]: INFO : Stage: umount
Apr 30 00:20:23.343107 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:20:23.343107 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:20:23.349541 ignition[1012]: INFO : umount: umount passed
Apr 30 00:20:23.349541 ignition[1012]: INFO : Ignition finished successfully
Apr 30 00:20:23.348085 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:20:23.348249 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:20:23.351937 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:20:23.352128 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:20:23.354999 systemd[1]: Stopped target network.target - Network.
Apr 30 00:20:23.372400 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:20:23.372542 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:20:23.374742 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:20:23.374817 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:20:23.377051 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:20:23.377131 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:20:23.379719 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:20:23.379780 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:20:23.382248 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:20:23.384230 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:20:23.387125 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 30 00:20:23.387395 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:20:23.390679 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:20:23.390845 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:20:23.393577 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:20:23.393626 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:20:23.405119 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:20:23.406236 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:20:23.406297 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:20:23.408898 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:20:23.424847 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:20:23.424993 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:20:23.431293 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:20:23.432569 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:20:23.448366 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:20:23.449567 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:20:23.452356 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:20:23.453633 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:20:23.456042 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:20:23.456110 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:20:23.458335 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:20:23.458395 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:20:23.462338 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:20:23.462399 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:20:23.465588 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:20:23.465652 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:20:23.498185 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:20:23.500879 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:20:23.500957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:20:23.504549 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:20:23.505775 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:20:23.508112 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:20:23.508175 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:20:23.511792 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:20:23.512820 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:20:23.515340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:20:23.515396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:23.519115 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:20:23.520281 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:20:23.995647 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:20:24.001604 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:20:24.003690 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:20:24.005826 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:20:24.005885 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:20:24.017161 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:20:24.026228 systemd[1]: Switching root.
Apr 30 00:20:24.052563 systemd-journald[192]: Journal stopped
Apr 30 00:20:27.645590 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:20:27.645684 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:20:27.645721 kernel: SELinux: policy capability open_perms=1
Apr 30 00:20:27.645739 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:20:27.645762 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:20:27.645780 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:20:27.645798 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:20:27.645815 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:20:27.645832 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:20:27.645850 kernel: audit: type=1403 audit(1745972425.842:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:20:27.645869 systemd[1]: Successfully loaded SELinux policy in 69.071ms.
Apr 30 00:20:27.645907 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.987ms.
Apr 30 00:20:27.645931 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:20:27.645961 systemd[1]: Detected virtualization kvm.
Apr 30 00:20:27.645981 systemd[1]: Detected architecture x86-64.
Apr 30 00:20:27.645999 systemd[1]: Detected first boot.
Apr 30 00:20:27.646064 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:20:27.646086 zram_generator::config[1057]: No configuration found.
Apr 30 00:20:27.646114 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:20:27.646132 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 00:20:27.646150 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 00:20:27.646183 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:20:27.646202 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:20:27.646220 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:20:27.646238 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:20:27.646256 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:20:27.646275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:20:27.646293 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:20:27.646312 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:20:27.646335 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:20:27.646353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:20:27.646371 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:20:27.646389 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:20:27.646407 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:20:27.646425 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:20:27.646451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:20:27.646472 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 00:20:27.646493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:20:27.646521 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 00:20:27.646545 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 00:20:27.646567 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:20:27.646589 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:20:27.646621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:20:27.646645 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:20:27.646668 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:20:27.646690 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:20:27.646715 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:20:27.646733 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:20:27.646750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:20:27.646769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:20:27.646788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:20:27.646806 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:20:27.646825 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:20:27.646848 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:20:27.646866 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:20:27.646889 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:27.646907 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:20:27.646924 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:20:27.646942 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:20:27.646973 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:20:27.646992 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:20:27.647011 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:20:27.647049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:20:27.647082 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:20:27.647100 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:20:27.647119 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:20:27.647136 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:20:27.647155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:20:27.647173 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:20:27.647191 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:20:27.647209 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:20:27.647228 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 00:20:27.647252 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 00:20:27.647270 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 00:20:27.647287 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 00:20:27.647306 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:20:27.647324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:20:27.647343 kernel: fuse: init (API version 7.39)
Apr 30 00:20:27.647362 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:20:27.647380 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:20:27.647399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:20:27.647422 kernel: loop: module loaded
Apr 30 00:20:27.647439 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 00:20:27.647456 systemd[1]: Stopped verity-setup.service.
Apr 30 00:20:27.647517 systemd-journald[1120]: Collecting audit messages is disabled.
Apr 30 00:20:27.647552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:27.647571 systemd-journald[1120]: Journal started
Apr 30 00:20:27.647607 systemd-journald[1120]: Runtime Journal (/run/log/journal/924cfad8298e4d32b5a947bdcbbfbf12) is 6.0M, max 48.4M, 42.3M free.
Apr 30 00:20:27.032105 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:20:27.075852 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 00:20:27.076580 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 00:20:27.654439 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:20:27.657912 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:20:27.674108 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:20:27.675880 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:20:27.677210 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:20:27.678995 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:20:27.683226 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:20:27.684817 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:20:27.686769 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:20:27.687190 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:20:27.688990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:20:27.689479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:20:27.691519 kernel: ACPI: bus type drm_connector registered
Apr 30 00:20:27.691938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:20:27.692354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:20:27.694305 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:20:27.694540 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:20:27.696164 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:20:27.696384 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:20:27.698081 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:20:27.698338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:20:27.700048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:20:27.701907 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:20:27.703890 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:20:27.722816 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:20:27.733175 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:20:27.736429 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:20:27.737740 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:20:27.737771 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:20:27.740215 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:20:27.743165 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:20:27.749837 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:20:27.751399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:20:27.753857 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:20:27.761195 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:20:27.763076 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:20:27.764474 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:20:27.766043 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:20:27.767208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:20:27.776271 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:20:27.779403 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:20:27.781898 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:20:27.783467 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:20:27.785014 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:20:27.791219 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:20:27.798916 systemd-journald[1120]: Time spent on flushing to /var/log/journal/924cfad8298e4d32b5a947bdcbbfbf12 is 30.350ms for 957 entries.
Apr 30 00:20:27.798916 systemd-journald[1120]: System Journal (/var/log/journal/924cfad8298e4d32b5a947bdcbbfbf12) is 8.0M, max 195.6M, 187.6M free.
Apr 30 00:20:27.835631 systemd-journald[1120]: Received client request to flush runtime journal.
Apr 30 00:20:27.835675 kernel: loop0: detected capacity change from 0 to 140992
Apr 30 00:20:27.810208 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:20:27.820314 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:20:27.823363 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:20:27.825556 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:20:27.830062 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:20:27.841123 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:20:27.842260 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:20:27.844240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:20:27.849623 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 00:20:27.865472 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:20:27.866217 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:20:27.870659 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:20:27.882986 kernel: loop1: detected capacity change from 0 to 138184
Apr 30 00:20:27.881551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:20:27.907826 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 30 00:20:27.907849 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Apr 30 00:20:27.914877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:20:27.926060 kernel: loop2: detected capacity change from 0 to 205544
Apr 30 00:20:27.960060 kernel: loop3: detected capacity change from 0 to 140992
Apr 30 00:20:27.973069 kernel: loop4: detected capacity change from 0 to 138184
Apr 30 00:20:27.987246 kernel: loop5: detected capacity change from 0 to 205544
Apr 30 00:20:27.992917 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 30 00:20:27.993643 (sd-merge)[1195]: Merged extensions into '/usr'.
Apr 30 00:20:28.000710 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:20:28.000731 systemd[1]: Reloading...
Apr 30 00:20:28.102152 zram_generator::config[1221]: No configuration found.
Apr 30 00:20:28.410439 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:20:28.412835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:20:28.495743 systemd[1]: Reloading finished in 494 ms.
Apr 30 00:20:28.572315 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:20:28.584975 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:20:28.617835 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:20:28.638143 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:20:28.663675 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:20:28.663718 systemd[1]: Reloading...
Apr 30 00:20:28.710705 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:20:28.711219 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:20:28.713603 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:20:28.715109 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 30 00:20:28.715305 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Apr 30 00:20:28.723926 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:20:28.724121 systemd-tmpfiles[1259]: Skipping /boot
Apr 30 00:20:28.746698 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:20:28.746719 systemd-tmpfiles[1259]: Skipping /boot
Apr 30 00:20:28.785074 zram_generator::config[1289]: No configuration found.
Apr 30 00:20:28.963816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:20:29.037379 systemd[1]: Reloading finished in 373 ms.
Apr 30 00:20:29.062053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:20:29.069713 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:20:29.113349 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:20:29.140441 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:20:29.147923 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:20:29.154689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:20:29.174618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:20:29.211165 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:20:29.261123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:29.261359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:20:29.265412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:20:29.281890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:20:29.293408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:20:29.293652 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Apr 30 00:20:29.298471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:20:29.315135 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:20:29.318078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:29.319753 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:20:29.326546 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:20:29.326922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:20:29.337006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:20:29.337295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:20:29.339779 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:20:29.340082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:20:29.350415 augenrules[1354]: No rules
Apr 30 00:20:29.352104 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:20:29.352408 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:20:29.362376 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:29.362670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:20:29.386734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:20:29.402360 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:20:29.432480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:20:29.438769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:20:29.451127 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:20:29.469123 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:29.470555 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:20:29.475574 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:20:29.477712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:20:29.481985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:20:29.482886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:20:29.485499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:20:29.486370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:20:29.506486 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:20:29.506732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:20:29.546108 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:20:29.585984 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:20:29.597063 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1389)
Apr 30 00:20:29.598794 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:20:29.607540 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 00:20:29.623345 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:29.631246 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:20:29.642450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:20:29.650341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:20:29.653622 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 00:20:29.657195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:20:29.671715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:20:29.674779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:20:29.675425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:20:29.678434 augenrules[1405]: /sbin/augenrules: No change
Apr 30 00:20:29.680205 kernel: ACPI: button: Power Button [PWRF]
Apr 30 00:20:29.680230 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:20:29.686229 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 00:20:29.686553 systemd-resolved[1328]: Positive Trust Anchors:
Apr 30 00:20:29.686578 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:20:29.686623 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:20:29.687741 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:20:29.687787 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:20:29.688510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:20:29.688752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:20:29.690566 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:20:29.690794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:20:29.696509 systemd-resolved[1328]: Defaulting to hostname 'linux'.
Apr 30 00:20:29.700986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:20:29.701866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:20:29.707097 augenrules[1432]: No rules
Apr 30 00:20:29.709205 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:20:29.725090 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:20:29.725377 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:20:29.731375 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 30 00:20:29.732091 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 30 00:20:29.732293 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 30 00:20:29.730566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:20:29.730851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:20:29.751247 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 30 00:20:29.772056 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 00:20:29.771854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:20:29.786974 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:20:29.824228 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:20:29.838464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:20:29.838666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:20:29.845850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:20:29.866047 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:20:29.867821 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 00:20:29.887243 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:20:29.901068 systemd-networkd[1421]: lo: Link UP
Apr 30 00:20:29.901088 systemd-networkd[1421]: lo: Gained carrier
Apr 30 00:20:29.904360 systemd-networkd[1421]: Enumeration completed
Apr 30 00:20:29.904473 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:20:29.905613 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:20:29.905686 systemd-networkd[1421]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:20:29.918283 systemd-networkd[1421]: eth0: Link UP
Apr 30 00:20:29.918368 systemd-networkd[1421]: eth0: Gained carrier
Apr 30 00:20:29.918447 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:20:29.919437 systemd[1]: Reached target network.target - Network.
Apr 30 00:20:29.928316 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:20:29.937869 kernel: kvm_amd: TSC scaling supported
Apr 30 00:20:29.937943 kernel: kvm_amd: Nested Virtualization enabled
Apr 30 00:20:29.937957 kernel: kvm_amd: Nested Paging enabled
Apr 30 00:20:29.938682 kernel: kvm_amd: LBR virtualization supported
Apr 30 00:20:29.939135 systemd-networkd[1421]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:20:29.939263 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Apr 30 00:20:29.940072 kernel: kvm_amd: Virtual GIF supported
Apr 30 00:20:29.942888 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection.
Apr 30 00:20:30.603644 systemd-resolved[1328]: Clock change detected. Flushing caches.
Apr 30 00:20:30.603988 systemd-timesyncd[1424]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 30 00:20:30.604226 systemd-timesyncd[1424]: Initial clock synchronization to Wed 2025-04-30 00:20:30.603569 UTC.
Apr 30 00:20:30.619476 kernel: EDAC MC: Ver: 3.0.0
Apr 30 00:20:30.646076 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:20:30.663365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:20:30.674667 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:20:30.683791 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:20:30.716983 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:20:30.731607 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:20:30.732776 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:20:30.733982 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:20:30.735323 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:20:30.736857 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:20:30.738098 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:20:30.739380 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:20:30.740650 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:20:30.740683 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:20:30.741630 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:20:30.743595 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:20:30.766228 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:20:30.779204 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:20:30.781848 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:20:30.783500 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:20:30.784709 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:20:30.785714 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:20:30.786727 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:20:30.786759 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:20:30.787805 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:20:30.790189 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:20:30.792543 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:20:30.794548 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:20:30.798696 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:20:30.802562 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:20:30.802663 jq[1464]: false
Apr 30 00:20:30.803994 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:20:30.818983 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:20:30.823790 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:20:30.826959 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found loop3
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found loop4
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found loop5
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found sr0
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found vda
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found vda1
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found vda2
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found vda3
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found usr
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found vda4
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found vda6
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found vda7
Apr 30 00:20:30.840739 extend-filesystems[1465]: Found vda9
Apr 30 00:20:30.840739 extend-filesystems[1465]: Checking size of /dev/vda9
Apr 30 00:20:30.846112 dbus-daemon[1463]: [system] SELinux support is enabled
Apr 30 00:20:30.848757 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:20:30.853976 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:20:30.854604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:20:30.855399 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:20:30.857575 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:20:30.859493 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:20:30.862963 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:20:30.866398 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:20:30.866727 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:20:30.868764 extend-filesystems[1465]: Resized partition /dev/vda9
Apr 30 00:20:30.870987 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:20:30.874624 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:20:30.871270 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:20:30.891973 jq[1477]: true
Apr 30 00:20:30.895588 update_engine[1476]: I20250430 00:20:30.895102 1476 main.cc:92] Flatcar Update Engine starting
Apr 30 00:20:30.897821 update_engine[1476]: I20250430 00:20:30.897686 1476 update_check_scheduler.cc:74] Next update check in 3m35s
Apr 30 00:20:30.898489 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1379)
Apr 30 00:20:30.899704 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:20:30.903096 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:20:30.903146 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:20:30.904760 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:20:30.904795 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:20:30.908028 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:20:30.908272 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:20:30.910791 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 30 00:20:30.941946 jq[1497]: true
Apr 30 00:20:30.945391 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:20:30.958914 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:20:31.019851 tar[1484]: linux-amd64/helm
Apr 30 00:20:31.020472 systemd-logind[1472]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 00:20:31.020504 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 00:20:31.020935 systemd-logind[1472]: New seat seat0.
Apr 30 00:20:31.022784 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:20:31.042685 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:20:31.194478 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 30 00:20:31.222086 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 00:20:31.222086 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 00:20:31.222086 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 30 00:20:31.227988 extend-filesystems[1465]: Resized filesystem in /dev/vda9
Apr 30 00:20:31.223897 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:20:31.230480 bash[1516]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:20:31.224136 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:20:31.231726 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:20:31.234490 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 30 00:20:31.313298 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 00:20:31.359589 containerd[1488]: time="2025-04-30T00:20:31.359465371Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 00:20:31.388896 containerd[1488]: time="2025-04-30T00:20:31.388803189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393020 containerd[1488]: time="2025-04-30T00:20:31.392938244Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393020 containerd[1488]: time="2025-04-30T00:20:31.392991434Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:20:31.393020 containerd[1488]: time="2025-04-30T00:20:31.393015038Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:20:31.393296 containerd[1488]: time="2025-04-30T00:20:31.393262973Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:20:31.393335 containerd[1488]: time="2025-04-30T00:20:31.393297708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393567 containerd[1488]: time="2025-04-30T00:20:31.393388228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393567 containerd[1488]: time="2025-04-30T00:20:31.393413686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393738 containerd[1488]: time="2025-04-30T00:20:31.393699422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393738 containerd[1488]: time="2025-04-30T00:20:31.393725561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393792 containerd[1488]: time="2025-04-30T00:20:31.393742923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393792 containerd[1488]: time="2025-04-30T00:20:31.393758102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:31.393978 containerd[1488]: time="2025-04-30T00:20:31.393905278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:31.394238 containerd[1488]: time="2025-04-30T00:20:31.394202966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:20:31.394551 containerd[1488]: time="2025-04-30T00:20:31.394365741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:20:31.394551 containerd[1488]: time="2025-04-30T00:20:31.394395096Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:20:31.394551 containerd[1488]: time="2025-04-30T00:20:31.394534938Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:20:31.394749 containerd[1488]: time="2025-04-30T00:20:31.394604629Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:20:31.405267 containerd[1488]: time="2025-04-30T00:20:31.405192445Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:20:31.405426 containerd[1488]: time="2025-04-30T00:20:31.405308643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:20:31.405426 containerd[1488]: time="2025-04-30T00:20:31.405332738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:20:31.405426 containerd[1488]: time="2025-04-30T00:20:31.405356403Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:20:31.405426 containerd[1488]: time="2025-04-30T00:20:31.405375108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:20:31.405686 containerd[1488]: time="2025-04-30T00:20:31.405649382Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:20:31.405989 containerd[1488]: time="2025-04-30T00:20:31.405952921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:20:31.406127 containerd[1488]: time="2025-04-30T00:20:31.406092754Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:20:31.406163 containerd[1488]: time="2025-04-30T00:20:31.406127829Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:20:31.406163 containerd[1488]: time="2025-04-30T00:20:31.406157074Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:20:31.406224 containerd[1488]: time="2025-04-30T00:20:31.406177723Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:20:31.406224 containerd[1488]: time="2025-04-30T00:20:31.406194765Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:20:31.406224 containerd[1488]: time="2025-04-30T00:20:31.406211406Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:20:31.406304 containerd[1488]: time="2025-04-30T00:20:31.406230031Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:20:31.406304 containerd[1488]: time="2025-04-30T00:20:31.406249117Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:20:31.406304 containerd[1488]: time="2025-04-30T00:20:31.406266940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:20:31.406304 containerd[1488]: time="2025-04-30T00:20:31.406297157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:20:31.406407 containerd[1488]: time="2025-04-30T00:20:31.406312836Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:20:31.406407 containerd[1488]: time="2025-04-30T00:20:31.406337653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406407 containerd[1488]: time="2025-04-30T00:20:31.406354995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406407 containerd[1488]: time="2025-04-30T00:20:31.406371697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406407 containerd[1488]: time="2025-04-30T00:20:31.406392696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406407 containerd[1488]: time="2025-04-30T00:20:31.406408295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406589 containerd[1488]: time="2025-04-30T00:20:31.406557455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406589 containerd[1488]: time="2025-04-30T00:20:31.406577042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406639 containerd[1488]: time="2025-04-30T00:20:31.406594635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406639 containerd[1488]: time="2025-04-30T00:20:31.406613520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406639 containerd[1488]: time="2025-04-30T00:20:31.406633287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406717 containerd[1488]: time="2025-04-30T00:20:31.406649387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406717 containerd[1488]: time="2025-04-30T00:20:31.406665848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406717 containerd[1488]: time="2025-04-30T00:20:31.406681838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406717 containerd[1488]: time="2025-04-30T00:20:31.406701585Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:20:31.406841 containerd[1488]: time="2025-04-30T00:20:31.406731802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406841 containerd[1488]: time="2025-04-30T00:20:31.406757731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.406841 containerd[1488]: time="2025-04-30T00:20:31.406774742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:20:31.406921 containerd[1488]: time="2025-04-30T00:20:31.406847709Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:20:31.406980 containerd[1488]: time="2025-04-30T00:20:31.406874700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:20:31.407020 containerd[1488]: time="2025-04-30T00:20:31.406983033Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:20:31.407020 containerd[1488]: time="2025-04-30T00:20:31.407005445Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:20:31.407101 containerd[1488]: time="2025-04-30T00:20:31.407018419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.407101 containerd[1488]: time="2025-04-30T00:20:31.407047063Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:20:31.407101 containerd[1488]: time="2025-04-30T00:20:31.407063955Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:20:31.407101 containerd[1488]: time="2025-04-30T00:20:31.407078151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:20:31.407610 containerd[1488]: time="2025-04-30T00:20:31.407433037Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:20:31.407610 containerd[1488]: time="2025-04-30T00:20:31.407520451Z" level=info msg="Connect containerd service"
Apr 30 00:20:31.407610 containerd[1488]: time="2025-04-30T00:20:31.407560175Z" level=info msg="using legacy CRI server"
Apr 30 00:20:31.407610 containerd[1488]: time="2025-04-30T00:20:31.407572328Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:20:31.407890 containerd[1488]: time="2025-04-30T00:20:31.407727950Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:20:31.409087 containerd[1488]: time="2025-04-30T00:20:31.408626545Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:20:31.409087 containerd[1488]: time="2025-04-30T00:20:31.408775615Z" level=info msg="Start subscribing containerd event"
Apr 30 00:20:31.409087 containerd[1488]: time="2025-04-30T00:20:31.408837240Z" level=info msg="Start recovering state"
Apr 30 00:20:31.409087 containerd[1488]: time="2025-04-30T00:20:31.408985549Z" level=info msg="Start event monitor"
Apr 30 00:20:31.409087 containerd[1488]: time="2025-04-30T00:20:31.409008431Z" level=info msg="Start snapshots syncer"
Apr 30 00:20:31.409087 containerd[1488]: time="2025-04-30T00:20:31.409023830Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:20:31.409087 containerd[1488]: time="2025-04-30T00:20:31.409033338Z" level=info msg="Start streaming server"
Apr 30 00:20:31.409640 containerd[1488]: time="2025-04-30T00:20:31.409601293Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:20:31.409831 containerd[1488]: time="2025-04-30T00:20:31.409671635Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:20:31.411659 containerd[1488]: time="2025-04-30T00:20:31.411631050Z" level=info msg="containerd successfully booted in 0.054675s"
Apr 30 00:20:31.411711 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:20:31.456968 tar[1484]: linux-amd64/LICENSE
Apr 30 00:20:31.456968 tar[1484]: linux-amd64/README.md
Apr 30 00:20:31.466933 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:20:31.479046 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:20:31.497979 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 00:20:31.509851 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 00:20:31.512556 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:51920.service - OpenSSH per-connection server daemon (10.0.0.1:51920).
Apr 30 00:20:31.518078 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 00:20:31.518393 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 00:20:31.522269 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 00:20:31.540659 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 00:20:31.547829 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 00:20:31.551074 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 00:20:31.552800 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 00:20:31.576940 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 51920 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:31.579739 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:31.591190 systemd-logind[1472]: New session 1 of user core.
Apr 30 00:20:31.592880 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 00:20:31.605737 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 00:20:31.618217 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 00:20:31.634729 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 00:20:31.639020 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 00:20:31.750008 systemd[1558]: Queued start job for default target default.target.
Apr 30 00:20:31.761828 systemd[1558]: Created slice app.slice - User Application Slice.
Apr 30 00:20:31.761856 systemd[1558]: Reached target paths.target - Paths.
Apr 30 00:20:31.761869 systemd[1558]: Reached target timers.target - Timers.
Apr 30 00:20:31.763535 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 00:20:31.776257 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 00:20:31.776439 systemd[1558]: Reached target sockets.target - Sockets.
Apr 30 00:20:31.776495 systemd[1558]: Reached target basic.target - Basic System.
Apr 30 00:20:31.776541 systemd[1558]: Reached target default.target - Main User Target.
Apr 30 00:20:31.776576 systemd[1558]: Startup finished in 130ms.
Apr 30 00:20:31.777094 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 00:20:31.780301 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 00:20:31.840804 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:51932.service - OpenSSH per-connection server daemon (10.0.0.1:51932).
Apr 30 00:20:31.880917 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 51932 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:31.882326 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:31.886458 systemd-logind[1472]: New session 2 of user core.
Apr 30 00:20:31.897628 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 00:20:31.954826 sshd[1571]: Connection closed by 10.0.0.1 port 51932
Apr 30 00:20:31.955324 sshd-session[1569]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:31.975084 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:51932.service: Deactivated successfully.
Apr 30 00:20:31.976827 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 00:20:31.978578 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit.
Apr 30 00:20:31.992779 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:51936.service - OpenSSH per-connection server daemon (10.0.0.1:51936).
Apr 30 00:20:31.995415 systemd-logind[1472]: Removed session 2.
Apr 30 00:20:32.032056 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 51936 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:32.033728 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:32.038034 systemd-logind[1472]: New session 3 of user core.
Apr 30 00:20:32.051584 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 00:20:32.110664 sshd[1578]: Connection closed by 10.0.0.1 port 51936
Apr 30 00:20:32.110950 sshd-session[1576]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:32.115112 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:51936.service: Deactivated successfully.
Apr 30 00:20:32.117019 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 00:20:32.117630 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit.
Apr 30 00:20:32.118521 systemd-logind[1472]: Removed session 3.
Apr 30 00:20:32.522714 systemd-networkd[1421]: eth0: Gained IPv6LL
Apr 30 00:20:32.526733 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:20:32.529020 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:20:32.539683 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 30 00:20:32.542352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:20:32.544943 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:20:32.567436 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:20:32.570516 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 30 00:20:32.570767 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 30 00:20:32.572428 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:20:33.807487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:20:33.822789 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:20:33.825927 systemd[1]: Startup finished in 907ms (kernel) + 8.105s (initrd) + 7.389s (userspace) = 16.401s.
Apr 30 00:20:33.829257 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:20:34.566148 kubelet[1603]: E0430 00:20:34.566012 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:20:34.570843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:20:34.571066 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:20:34.571433 systemd[1]: kubelet.service: Consumed 1.241s CPU time.
Apr 30 00:20:42.123219 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:46428.service - OpenSSH per-connection server daemon (10.0.0.1:46428).
Apr 30 00:20:42.160897 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 46428 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:42.162356 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:42.166220 systemd-logind[1472]: New session 4 of user core.
Apr 30 00:20:42.175580 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:20:42.229401 sshd[1619]: Connection closed by 10.0.0.1 port 46428
Apr 30 00:20:42.229801 sshd-session[1617]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:42.244768 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:46428.service: Deactivated successfully.
Apr 30 00:20:42.247035 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:20:42.249051 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:20:42.256789 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:46436.service - OpenSSH per-connection server daemon (10.0.0.1:46436).
Apr 30 00:20:42.257773 systemd-logind[1472]: Removed session 4.
Apr 30 00:20:42.289667 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 46436 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:42.291044 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:42.294650 systemd-logind[1472]: New session 5 of user core.
Apr 30 00:20:42.308583 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:20:42.358011 sshd[1626]: Connection closed by 10.0.0.1 port 46436
Apr 30 00:20:42.358495 sshd-session[1624]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:42.376277 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:46436.service: Deactivated successfully.
Apr 30 00:20:42.378207 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:20:42.379884 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:20:42.390639 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:46442.service - OpenSSH per-connection server daemon (10.0.0.1:46442).
Apr 30 00:20:42.392067 systemd-logind[1472]: Removed session 5.
Apr 30 00:20:42.425375 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 46442 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:42.426882 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:42.431008 systemd-logind[1472]: New session 6 of user core.
Apr 30 00:20:42.440585 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:20:42.494711 sshd[1633]: Connection closed by 10.0.0.1 port 46442
Apr 30 00:20:42.495187 sshd-session[1631]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:42.506408 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:46442.service: Deactivated successfully.
Apr 30 00:20:42.508619 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:20:42.510140 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:20:42.522729 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:46452.service - OpenSSH per-connection server daemon (10.0.0.1:46452).
Apr 30 00:20:42.523698 systemd-logind[1472]: Removed session 6.
Apr 30 00:20:42.555742 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 46452 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:42.557291 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:42.561571 systemd-logind[1472]: New session 7 of user core.
Apr 30 00:20:42.572594 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:20:42.632112 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:20:42.632491 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:20:42.647687 sudo[1641]: pam_unix(sudo:session): session closed for user root
Apr 30 00:20:42.649350 sshd[1640]: Connection closed by 10.0.0.1 port 46452
Apr 30 00:20:42.649845 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:42.663288 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:46452.service: Deactivated successfully.
Apr 30 00:20:42.665186 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 00:20:42.666903 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit.
Apr 30 00:20:42.677806 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:46460.service - OpenSSH per-connection server daemon (10.0.0.1:46460).
Apr 30 00:20:42.678843 systemd-logind[1472]: Removed session 7.
Apr 30 00:20:42.710072 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 46460 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:42.711896 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:42.715867 systemd-logind[1472]: New session 8 of user core.
Apr 30 00:20:42.725590 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 00:20:42.779313 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:20:42.779684 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:20:42.783577 sudo[1650]: pam_unix(sudo:session): session closed for user root
Apr 30 00:20:42.790280 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 00:20:42.790655 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:20:42.806844 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:20:42.839018 augenrules[1672]: No rules
Apr 30 00:20:42.841042 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:20:42.841304 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:20:42.842713 sudo[1649]: pam_unix(sudo:session): session closed for user root
Apr 30 00:20:42.844336 sshd[1648]: Connection closed by 10.0.0.1 port 46460
Apr 30 00:20:42.844813 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Apr 30 00:20:42.862263 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:46460.service: Deactivated successfully.
Apr 30 00:20:42.864155 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 00:20:42.865898 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit.
Apr 30 00:20:42.875679 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:46466.service - OpenSSH per-connection server daemon (10.0.0.1:46466).
Apr 30 00:20:42.876709 systemd-logind[1472]: Removed session 8.
Apr 30 00:20:42.908743 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 46466 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE
Apr 30 00:20:42.910543 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:20:42.914611 systemd-logind[1472]: New session 9 of user core.
Apr 30 00:20:42.924583 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:20:42.978108 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:20:42.978479 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:20:43.248658 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:20:43.248904 (dockerd)[1703]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:20:43.502295 dockerd[1703]: time="2025-04-30T00:20:43.502121285Z" level=info msg="Starting up"
Apr 30 00:20:43.936383 dockerd[1703]: time="2025-04-30T00:20:43.936233220Z" level=info msg="Loading containers: start."
Apr 30 00:20:44.140479 kernel: Initializing XFRM netlink socket
Apr 30 00:20:44.231845 systemd-networkd[1421]: docker0: Link UP
Apr 30 00:20:44.278418 dockerd[1703]: time="2025-04-30T00:20:44.278357394Z" level=info msg="Loading containers: done."
Apr 30 00:20:44.294212 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2633617713-merged.mount: Deactivated successfully.
Apr 30 00:20:44.297622 dockerd[1703]: time="2025-04-30T00:20:44.297572580Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:20:44.297714 dockerd[1703]: time="2025-04-30T00:20:44.297679891Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Apr 30 00:20:44.297851 dockerd[1703]: time="2025-04-30T00:20:44.297825845Z" level=info msg="Daemon has completed initialization"
Apr 30 00:20:44.393399 dockerd[1703]: time="2025-04-30T00:20:44.393312707Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:20:44.393624 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:20:44.821282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:20:44.831619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:20:45.008374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:20:45.013593 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:20:45.055945 kubelet[1906]: E0430 00:20:45.055839 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:20:45.064331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:20:45.064682 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:20:45.162642 containerd[1488]: time="2025-04-30T00:20:45.162511309Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
Apr 30 00:20:47.859939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280204701.mount: Deactivated successfully.
Apr 30 00:20:50.364205 containerd[1488]: time="2025-04-30T00:20:50.364110724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:50.365615 containerd[1488]: time="2025-04-30T00:20:50.365561906Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
Apr 30 00:20:50.368770 containerd[1488]: time="2025-04-30T00:20:50.368712665Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:50.375741 containerd[1488]: time="2025-04-30T00:20:50.375690921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:50.376796 containerd[1488]: time="2025-04-30T00:20:50.376718629Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 5.214160923s"
Apr 30 00:20:50.376796 containerd[1488]: time="2025-04-30T00:20:50.376782198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
Apr 30 00:20:50.378285 containerd[1488]: time="2025-04-30T00:20:50.378244470Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
Apr 30 00:20:52.863903 containerd[1488]: time="2025-04-30T00:20:52.863813209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:52.864481 containerd[1488]: time="2025-04-30T00:20:52.864403265Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
Apr 30 00:20:52.868035 containerd[1488]: time="2025-04-30T00:20:52.867977739Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:52.871588 containerd[1488]: time="2025-04-30T00:20:52.871540069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:52.873050 containerd[1488]: time="2025-04-30T00:20:52.872970983Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.494699341s"
Apr 30 00:20:52.873050 containerd[1488]: time="2025-04-30T00:20:52.873051073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
Apr 30 00:20:52.873548 containerd[1488]: time="2025-04-30T00:20:52.873516135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
Apr 30 00:20:55.314820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 00:20:55.324619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:20:55.473221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:20:55.477801 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:20:55.519939 kubelet[1987]: E0430 00:20:55.519861 1987 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:20:55.524577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:20:55.524875 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:20:56.921884 containerd[1488]: time="2025-04-30T00:20:56.921800824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:56.950808 containerd[1488]: time="2025-04-30T00:20:56.950722292Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
Apr 30 00:20:56.995245 containerd[1488]: time="2025-04-30T00:20:56.995200865Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:57.052619 containerd[1488]: time="2025-04-30T00:20:57.052552059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:20:57.053768 containerd[1488]: time="2025-04-30T00:20:57.053738414Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 4.180190569s"
Apr 30 00:20:57.053768 containerd[1488]: time="2025-04-30T00:20:57.053772779Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
Apr 30 00:20:57.054264 containerd[1488]: time="2025-04-30T00:20:57.054186414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
Apr 30 00:20:59.521033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516434266.mount: Deactivated successfully.
Apr 30 00:21:00.778591 containerd[1488]: time="2025-04-30T00:21:00.778511573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:00.781627 containerd[1488]: time="2025-04-30T00:21:00.781579457Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
Apr 30 00:21:00.787123 containerd[1488]: time="2025-04-30T00:21:00.787080894Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:00.792754 containerd[1488]: time="2025-04-30T00:21:00.792705051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:00.793495 containerd[1488]: time="2025-04-30T00:21:00.793462271Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.739215213s"
Apr 30 00:21:00.793495 containerd[1488]: time="2025-04-30T00:21:00.793489342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
Apr 30 00:21:00.793965 containerd[1488]: time="2025-04-30T00:21:00.793939727Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 00:21:01.266249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1130694139.mount: Deactivated successfully.
Apr 30 00:21:03.190827 containerd[1488]: time="2025-04-30T00:21:03.190749220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:03.221958 containerd[1488]: time="2025-04-30T00:21:03.221858515Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Apr 30 00:21:03.235034 containerd[1488]: time="2025-04-30T00:21:03.234972431Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:03.287545 containerd[1488]: time="2025-04-30T00:21:03.287480460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:03.288564 containerd[1488]: time="2025-04-30T00:21:03.288537738Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.494568876s"
Apr 30 00:21:03.288564 containerd[1488]: time="2025-04-30T00:21:03.288565551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 00:21:03.289137 containerd[1488]: time="2025-04-30T00:21:03.288968117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 00:21:03.917913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370121172.mount: Deactivated successfully.
Apr 30 00:21:03.924410 containerd[1488]: time="2025-04-30T00:21:03.924334490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:03.925542 containerd[1488]: time="2025-04-30T00:21:03.925492692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 30 00:21:03.926920 containerd[1488]: time="2025-04-30T00:21:03.926880907Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:03.929488 containerd[1488]: time="2025-04-30T00:21:03.929421751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:03.930067 containerd[1488]: time="2025-04-30T00:21:03.930025975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 641.034584ms"
Apr 30 00:21:03.930067 containerd[1488]: time="2025-04-30T00:21:03.930056905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 30 00:21:03.930749 containerd[1488]: time="2025-04-30T00:21:03.930703371Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Apr 30 00:21:04.502690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465199198.mount: Deactivated successfully.
Apr 30 00:21:05.551221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 00:21:05.563090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:21:05.759127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:21:05.765152 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:21:05.811126 kubelet[2111]: E0430 00:21:05.810831 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:21:05.815944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:21:05.816182 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:21:08.529174 containerd[1488]: time="2025-04-30T00:21:08.529073843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:08.545007 containerd[1488]: time="2025-04-30T00:21:08.544894400Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Apr 30 00:21:08.569876 containerd[1488]: time="2025-04-30T00:21:08.569798589Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:08.611317 containerd[1488]: time="2025-04-30T00:21:08.611252248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:21:08.612818 containerd[1488]: time="2025-04-30T00:21:08.612774780Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.682021433s"
Apr 30 00:21:08.612818 containerd[1488]: time="2025-04-30T00:21:08.612816600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Apr 30 00:21:10.896049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:21:10.912803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:21:10.943235 systemd[1]: Reloading requested from client PID 2151 ('systemctl') (unit session-9.scope)...
Apr 30 00:21:10.943258 systemd[1]: Reloading...
Apr 30 00:21:11.083521 zram_generator::config[2190]: No configuration found.
Apr 30 00:21:11.572195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:21:11.655141 systemd[1]: Reloading finished in 711 ms.
Apr 30 00:21:11.720655 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:21:11.723858 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 00:21:11.724122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:21:11.741882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:21:11.896464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:21:11.901325 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:21:11.970335 kubelet[2240]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:21:11.970335 kubelet[2240]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:21:11.970335 kubelet[2240]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:21:11.970778 kubelet[2240]: I0430 00:21:11.970362 2240 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:21:12.643920 kubelet[2240]: I0430 00:21:12.643862 2240 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Apr 30 00:21:12.643920 kubelet[2240]: I0430 00:21:12.643902 2240 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:21:12.644158 kubelet[2240]: I0430 00:21:12.644137 2240 server.go:929] "Client rotation is on, will bootstrap in background"
Apr 30 00:21:12.667673 kubelet[2240]: E0430 00:21:12.666953 2240 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:21:12.667673 kubelet[2240]: I0430 00:21:12.667136 2240 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:21:12.812358 kubelet[2240]: E0430 00:21:12.812304 2240 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 00:21:12.812358 kubelet[2240]: I0430 00:21:12.812333 2240 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 00:21:12.818904 kubelet[2240]: I0430 00:21:12.818875 2240 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:21:12.846224 kubelet[2240]: I0430 00:21:12.846180 2240 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Apr 30 00:21:12.846431 kubelet[2240]: I0430 00:21:12.846382 2240 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:21:12.846654 kubelet[2240]: I0430 00:21:12.846420 2240 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 00:21:12.846654 kubelet[2240]: I0430 00:21:12.846652 2240 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:21:12.846850 kubelet[2240]: I0430 00:21:12.846662 2240 container_manager_linux.go:300] "Creating device plugin manager"
Apr 30 00:21:12.846850 kubelet[2240]: I0430 00:21:12.846772 2240 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:21:12.848590 kubelet[2240]: I0430 00:21:12.848557 2240 kubelet.go:408] "Attempting to sync node with API server"
Apr 30 00:21:12.848590 kubelet[2240]: I0430 00:21:12.848585 2240 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:21:12.848657 kubelet[2240]: I0430 00:21:12.848623 2240 kubelet.go:314] "Adding apiserver pod source"
Apr 30 00:21:12.848657 kubelet[2240]: I0430 00:21:12.848641 2240 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:21:12.854479 kubelet[2240]: W0430 00:21:12.852365 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Apr 30 00:21:12.854479 kubelet[2240]: E0430 00:21:12.852438 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:21:12.855319 kubelet[2240]: W0430 00:21:12.855258 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Apr 30 00:21:12.855319 kubelet[2240]: E0430 00:21:12.855313 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:21:12.856674 kubelet[2240]: I0430 00:21:12.856635 2240 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 00:21:12.858149 kubelet[2240]: I0430 00:21:12.858133 2240 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:21:12.859308 kubelet[2240]: W0430 00:21:12.859281 2240 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:21:12.860391 kubelet[2240]: I0430 00:21:12.860019 2240 server.go:1269] "Started kubelet"
Apr 30 00:21:12.860391 kubelet[2240]: I0430 00:21:12.860080 2240 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:21:12.861135 kubelet[2240]: I0430 00:21:12.861112 2240 server.go:460] "Adding debug handlers to kubelet server"
Apr 30 00:21:12.861135 kubelet[2240]: I0430 00:21:12.861107 2240 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:21:12.863791 kubelet[2240]: I0430 00:21:12.863766 2240 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:21:12.864020 kubelet[2240]: I0430 00:21:12.863994 2240 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:21:12.868513 kubelet[2240]: I0430 00:21:12.865166 2240 volume_manager.go:289] "Starting Kubelet Volume Manager"
Apr 30 00:21:12.868513 kubelet[2240]: I0430 00:21:12.865310 2240 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 00:21:12.868513 kubelet[2240]: I0430 00:21:12.865469 2240 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 30 00:21:12.868513 kubelet[2240]: I0430 00:21:12.865706 2240 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:21:12.868513 kubelet[2240]: E0430 00:21:12.866030 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 00:21:12.868513 kubelet[2240]: W0430 00:21:12.866171 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Apr 30 00:21:12.868513 kubelet[2240]: E0430 00:21:12.866224 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver:
Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:12.868513 kubelet[2240]: E0430 00:21:12.866277 2240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" Apr 30 00:21:12.869709 kubelet[2240]: I0430 00:21:12.869344 2240 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:21:12.869709 kubelet[2240]: I0430 00:21:12.869465 2240 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:21:12.870674 kubelet[2240]: E0430 00:21:12.866419 2240 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183af0b6a86e2bf9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:21:12.859995129 +0000 UTC m=+0.954894609,LastTimestamp:2025-04-30 00:21:12.859995129 +0000 UTC m=+0.954894609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 00:21:12.872728 kubelet[2240]: E0430 00:21:12.872701 2240 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:21:12.875647 kubelet[2240]: I0430 00:21:12.875584 2240 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:21:12.892622 kubelet[2240]: I0430 00:21:12.892302 2240 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:21:12.892622 kubelet[2240]: I0430 00:21:12.892323 2240 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:21:12.892622 kubelet[2240]: I0430 00:21:12.892343 2240 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:21:12.895435 kubelet[2240]: I0430 00:21:12.895362 2240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:21:12.897314 kubelet[2240]: I0430 00:21:12.897225 2240 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:21:12.897314 kubelet[2240]: I0430 00:21:12.897269 2240 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:21:12.897415 kubelet[2240]: I0430 00:21:12.897326 2240 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 00:21:12.897498 kubelet[2240]: E0430 00:21:12.897434 2240 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:21:12.898234 kubelet[2240]: W0430 00:21:12.898190 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:12.898298 kubelet[2240]: E0430 00:21:12.898237 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:12.966118 kubelet[2240]: E0430 00:21:12.966069 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:12.998506 kubelet[2240]: E0430 00:21:12.998428 2240 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:21:13.066779 kubelet[2240]: E0430 00:21:13.066731 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.067189 kubelet[2240]: E0430 00:21:13.067132 2240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" Apr 30 00:21:13.167665 kubelet[2240]: E0430 00:21:13.167492 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.198780 kubelet[2240]: E0430 00:21:13.198723 2240 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:21:13.268272 kubelet[2240]: E0430 00:21:13.268204 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.369082 kubelet[2240]: E0430 00:21:13.369007 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.467883 kubelet[2240]: E0430 00:21:13.467757 2240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" Apr 30 00:21:13.469845 kubelet[2240]: E0430 00:21:13.469784 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.570280 kubelet[2240]: E0430 00:21:13.570231 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.599533 kubelet[2240]: E0430 00:21:13.599479 2240 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check 
may not have completed yet" Apr 30 00:21:13.670970 kubelet[2240]: E0430 00:21:13.670927 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.760947 kubelet[2240]: W0430 00:21:13.760769 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:13.760947 kubelet[2240]: E0430 00:21:13.760860 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:13.771458 kubelet[2240]: E0430 00:21:13.771411 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.771760 kubelet[2240]: W0430 00:21:13.771711 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:13.771796 kubelet[2240]: E0430 00:21:13.771768 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:13.860247 kubelet[2240]: W0430 00:21:13.860184 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:13.860247 kubelet[2240]: E0430 00:21:13.860241 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:13.871822 kubelet[2240]: E0430 00:21:13.871770 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:13.972368 kubelet[2240]: E0430 00:21:13.972280 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.072905 kubelet[2240]: E0430 00:21:14.072836 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.173485 kubelet[2240]: E0430 00:21:14.173391 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.180164 kubelet[2240]: W0430 00:21:14.180072 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:14.180164 kubelet[2240]: E0430 00:21:14.180151 
2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:14.269329 kubelet[2240]: E0430 00:21:14.269252 2240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="1.6s" Apr 30 00:21:14.274439 kubelet[2240]: E0430 00:21:14.274374 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.375654 kubelet[2240]: E0430 00:21:14.375349 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.399733 kubelet[2240]: E0430 00:21:14.399643 2240 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:21:14.476113 kubelet[2240]: E0430 00:21:14.476049 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.576693 kubelet[2240]: E0430 00:21:14.576587 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.677313 kubelet[2240]: E0430 00:21:14.677095 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.679027 kubelet[2240]: E0430 00:21:14.678985 2240 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:14.777755 kubelet[2240]: E0430 00:21:14.777655 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.878341 kubelet[2240]: E0430 00:21:14.878265 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:14.979119 kubelet[2240]: E0430 00:21:14.978923 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:15.079647 kubelet[2240]: E0430 00:21:15.079577 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:15.180133 kubelet[2240]: E0430 00:21:15.180051 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:15.193648 kubelet[2240]: I0430 00:21:15.193611 2240 policy_none.go:49] "None policy: Start" Apr 30 00:21:15.194424 kubelet[2240]: I0430 00:21:15.194395 2240 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:21:15.194424 kubelet[2240]: I0430 00:21:15.194417 2240 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:21:15.281214 kubelet[2240]: E0430 00:21:15.281031 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:15.381132 
systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:21:15.381694 kubelet[2240]: E0430 00:21:15.381663 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:15.428519 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:21:15.432106 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:21:15.442678 kubelet[2240]: I0430 00:21:15.442630 2240 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:21:15.442930 kubelet[2240]: I0430 00:21:15.442904 2240 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:21:15.442982 kubelet[2240]: I0430 00:21:15.442920 2240 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:21:15.443175 kubelet[2240]: I0430 00:21:15.443150 2240 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:21:15.444306 kubelet[2240]: E0430 00:21:15.444282 2240 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 00:21:15.529029 kubelet[2240]: E0430 00:21:15.528910 2240 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183af0b6a86e2bf9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:21:12.859995129 +0000 UTC m=+0.954894609,LastTimestamp:2025-04-30 00:21:12.859995129 +0000 UTC m=+0.954894609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 00:21:15.545136 kubelet[2240]: I0430 00:21:15.545012 2240 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:21:15.545239 kubelet[2240]: E0430 00:21:15.545197 2240 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Apr 30 00:21:15.746720 kubelet[2240]: I0430 00:21:15.746674 2240 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:21:15.747174 kubelet[2240]: E0430 00:21:15.747124 2240 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Apr 30 00:21:15.788943 kubelet[2240]: W0430 00:21:15.788873 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:15.788943 kubelet[2240]: E0430 00:21:15.788922 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:15.870108 kubelet[2240]: E0430 00:21:15.870029 2240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="3.2s" Apr 30 00:21:16.008990 systemd[1]: Created slice kubepods-burstable-pod3b76869aa4bfe2ca3bd2a46e34e222ef.slice - libcontainer container kubepods-burstable-pod3b76869aa4bfe2ca3bd2a46e34e222ef.slice. Apr 30 00:21:16.028493 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. Apr 30 00:21:16.046994 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. Apr 30 00:21:16.084678 kubelet[2240]: I0430 00:21:16.084602 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:16.084678 kubelet[2240]: I0430 00:21:16.084655 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:16.084678 kubelet[2240]: I0430 00:21:16.084675 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:21:16.084678 kubelet[2240]: I0430 00:21:16.084690 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b76869aa4bfe2ca3bd2a46e34e222ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b76869aa4bfe2ca3bd2a46e34e222ef\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:21:16.085314 kubelet[2240]: I0430 00:21:16.084762 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:16.085314 kubelet[2240]: I0430 00:21:16.084812 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:16.085314 kubelet[2240]: I0430 
00:21:16.084844 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:16.085314 kubelet[2240]: I0430 00:21:16.084865 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b76869aa4bfe2ca3bd2a46e34e222ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b76869aa4bfe2ca3bd2a46e34e222ef\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:21:16.085314 kubelet[2240]: I0430 00:21:16.084903 2240 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b76869aa4bfe2ca3bd2a46e34e222ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b76869aa4bfe2ca3bd2a46e34e222ef\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:21:16.149106 kubelet[2240]: I0430 00:21:16.148974 2240 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:21:16.149382 kubelet[2240]: E0430 00:21:16.149351 2240 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Apr 30 00:21:16.327111 kubelet[2240]: E0430 00:21:16.327045 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:16.327935 containerd[1488]: time="2025-04-30T00:21:16.327881704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b76869aa4bfe2ca3bd2a46e34e222ef,Namespace:kube-system,Attempt:0,}" Apr 30 00:21:16.345206 kubelet[2240]: E0430 00:21:16.345164 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:16.345731 containerd[1488]: time="2025-04-30T00:21:16.345688021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Apr 30 00:21:16.349933 kubelet[2240]: E0430 00:21:16.349901 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:16.350258 containerd[1488]: time="2025-04-30T00:21:16.350218498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Apr 30 00:21:16.432127 update_engine[1476]: I20250430 00:21:16.426151 1476 update_attempter.cc:509] Updating boot flags... 
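
The dns.go:153 warnings here are benign and repeat for every pod sandbox: the host's resolv.conf lists more than three nameservers, and the kubelet applies only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) because that is all the libc resolver honors. A minimal stdlib sketch of the same check follows; it is an illustration of the behavior, not the kubelet's actual implementation.

    // nameservers.go: a rough reimplementation of the check behind the
    // "Nameserver limits exceeded" warnings; the kubelet keeps the first
    // three nameserver entries from resolv.conf and reports the rest omitted.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // the limit the warning is enforcing

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, applied: %s, omitted: %s\n",
                strings.Join(servers[:maxNameservers], " "),
                strings.Join(servers[maxNameservers:], " "))
        }
    }
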
Apr 30 00:21:16.444421 kubelet[2240]: W0430 00:21:16.444296 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:16.444421 kubelet[2240]: E0430 00:21:16.444349 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:16.518517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2281) Apr 30 00:21:16.647386 kubelet[2240]: W0430 00:21:16.647294 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:16.647616 kubelet[2240]: E0430 00:21:16.647395 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:16.678159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2283) Apr 30 00:21:16.776420 kubelet[2240]: W0430 00:21:16.776094 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:16.776420 kubelet[2240]: E0430 00:21:16.776181 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:16.954751 kubelet[2240]: I0430 00:21:16.954700 2240 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:21:16.955055 kubelet[2240]: E0430 00:21:16.955036 2240 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Apr 30 00:21:18.557330 kubelet[2240]: I0430 00:21:18.557261 2240 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:21:18.565516 kubelet[2240]: E0430 00:21:18.558496 2240 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Apr 30 00:21:18.586688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068459795.mount: Deactivated successfully. 
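
The reflector.go warnings keep recurring for the same few types (Service, Node, CSIDriver, RuntimeClass) because each of the kubelet's shared informers runs a reflector that retries List/Watch with backoff until the API server at 10.0.0.119:6443 answers. A pared-down client-go sketch of that pattern is below; the kubeconfig path is illustrative, and the kubelet wires this up internally rather than through a factory like this.

    // informer.go: the list/watch loop whose failures surface above as
    // reflector.go "failed to list *v1.Service" warnings.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
        svc := factory.Core().V1().Services().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // each informer's reflector now retries List/Watch with backoff

        if !cache.WaitForCacheSync(stop, svc.HasSynced) {
            fmt.Println("cache never synced (API server unreachable)")
            return
        }
        fmt.Println("service cache synced")
    }
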
Apr 30 00:21:18.767566 containerd[1488]: time="2025-04-30T00:21:18.767420716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:21:18.803889 kubelet[2240]: E0430 00:21:18.803804 2240 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:18.947015 containerd[1488]: time="2025-04-30T00:21:18.946764751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 00:21:19.007342 containerd[1488]: time="2025-04-30T00:21:19.007257874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:21:19.071616 kubelet[2240]: E0430 00:21:19.071541 2240 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="6.4s" Apr 30 00:21:19.125685 containerd[1488]: time="2025-04-30T00:21:19.125572244Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:21:19.168963 containerd[1488]: time="2025-04-30T00:21:19.168838875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:21:19.225015 containerd[1488]: time="2025-04-30T00:21:19.224828979Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:21:19.317859 containerd[1488]: time="2025-04-30T00:21:19.317708056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:21:19.368011 containerd[1488]: time="2025-04-30T00:21:19.367923954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:21:19.369067 containerd[1488]: time="2025-04-30T00:21:19.368995985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.040997317s" Apr 30 00:21:19.371021 containerd[1488]: time="2025-04-30T00:21:19.370967498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
3.02518208s" Apr 30 00:21:19.883251 kubelet[2240]: W0430 00:21:19.883183 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:19.883251 kubelet[2240]: E0430 00:21:19.883248 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:19.896361 containerd[1488]: time="2025-04-30T00:21:19.896307575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.546002694s" Apr 30 00:21:20.921960 containerd[1488]: time="2025-04-30T00:21:20.921673545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:21:20.921960 containerd[1488]: time="2025-04-30T00:21:20.921766691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:21:20.921960 containerd[1488]: time="2025-04-30T00:21:20.921786509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:20.921960 containerd[1488]: time="2025-04-30T00:21:20.921906616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:20.922598 containerd[1488]: time="2025-04-30T00:21:20.922104109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:21:20.922598 containerd[1488]: time="2025-04-30T00:21:20.922170314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:21:20.922598 containerd[1488]: time="2025-04-30T00:21:20.922184111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:20.922598 containerd[1488]: time="2025-04-30T00:21:20.922291684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:20.971654 systemd[1]: Started cri-containerd-a16c09d3c4f0b5b0df2bfa8969c60e3b2e5dfc92a8c57962c0db1c7b22ed8cfb.scope - libcontainer container a16c09d3c4f0b5b0df2bfa8969c60e3b2e5dfc92a8c57962c0db1c7b22ed8cfb. Apr 30 00:21:20.979365 systemd[1]: Started cri-containerd-c20a4708fc8e95fae81e9b122e0178019d6564e112127d9a2d175ffea3e6db81.scope - libcontainer container c20a4708fc8e95fae81e9b122e0178019d6564e112127d9a2d175ffea3e6db81. 
Apr 30 00:21:21.030721 containerd[1488]: time="2025-04-30T00:21:21.030673161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b76869aa4bfe2ca3bd2a46e34e222ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"a16c09d3c4f0b5b0df2bfa8969c60e3b2e5dfc92a8c57962c0db1c7b22ed8cfb\"" Apr 30 00:21:21.031958 kubelet[2240]: E0430 00:21:21.031926 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:21.033968 containerd[1488]: time="2025-04-30T00:21:21.033945139Z" level=info msg="CreateContainer within sandbox \"a16c09d3c4f0b5b0df2bfa8969c60e3b2e5dfc92a8c57962c0db1c7b22ed8cfb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:21:21.037341 containerd[1488]: time="2025-04-30T00:21:21.037286779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c20a4708fc8e95fae81e9b122e0178019d6564e112127d9a2d175ffea3e6db81\"" Apr 30 00:21:21.037855 kubelet[2240]: E0430 00:21:21.037828 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:21.039399 containerd[1488]: time="2025-04-30T00:21:21.039373837Z" level=info msg="CreateContainer within sandbox \"c20a4708fc8e95fae81e9b122e0178019d6564e112127d9a2d175ffea3e6db81\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:21:21.226229 kubelet[2240]: W0430 00:21:21.226037 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:21.226229 kubelet[2240]: E0430 00:21:21.226120 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:21.480963 kubelet[2240]: W0430 00:21:21.480800 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:21.480963 kubelet[2240]: E0430 00:21:21.480876 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:21.559848 kubelet[2240]: W0430 00:21:21.559800 2240 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Apr 30 00:21:21.559848 kubelet[2240]: E0430 00:21:21.559841 2240 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:21:21.760613 kubelet[2240]: I0430 00:21:21.760404 2240 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:21:21.760780 kubelet[2240]: E0430 00:21:21.760745 2240 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Apr 30 00:21:22.520868 containerd[1488]: time="2025-04-30T00:21:22.519880215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:21:22.520868 containerd[1488]: time="2025-04-30T00:21:22.520617850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:21:22.520868 containerd[1488]: time="2025-04-30T00:21:22.520630555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:22.520868 containerd[1488]: time="2025-04-30T00:21:22.520744690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:22.549584 systemd[1]: Started cri-containerd-cd794cb2bb95054a7e15ddc75ddda38598c8bad0bb272fb681f408b07c4765b1.scope - libcontainer container cd794cb2bb95054a7e15ddc75ddda38598c8bad0bb272fb681f408b07c4765b1. Apr 30 00:21:22.583068 containerd[1488]: time="2025-04-30T00:21:22.582999245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd794cb2bb95054a7e15ddc75ddda38598c8bad0bb272fb681f408b07c4765b1\"" Apr 30 00:21:22.583742 kubelet[2240]: E0430 00:21:22.583718 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:22.585094 containerd[1488]: time="2025-04-30T00:21:22.585071092Z" level=info msg="CreateContainer within sandbox \"cd794cb2bb95054a7e15ddc75ddda38598c8bad0bb272fb681f408b07c4765b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:21:24.496877 containerd[1488]: time="2025-04-30T00:21:24.496788273Z" level=info msg="CreateContainer within sandbox \"a16c09d3c4f0b5b0df2bfa8969c60e3b2e5dfc92a8c57962c0db1c7b22ed8cfb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"abb13ecd774d6817aab201604af3d3b87bf7ca5e6837f0afe9de383dfe159eb5\"" Apr 30 00:21:24.497680 containerd[1488]: time="2025-04-30T00:21:24.497629121Z" level=info msg="StartContainer for \"abb13ecd774d6817aab201604af3d3b87bf7ca5e6837f0afe9de383dfe159eb5\"" Apr 30 00:21:24.528583 systemd[1]: Started cri-containerd-abb13ecd774d6817aab201604af3d3b87bf7ca5e6837f0afe9de383dfe159eb5.scope - libcontainer container abb13ecd774d6817aab201604af3d3b87bf7ca5e6837f0afe9de383dfe159eb5. 
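
The containerd entries trace the CRI call sequence for each static pod: RunPodSandbox returns a sandbox id, CreateContainer places the real container inside that sandbox, and StartContainer launches it. A compressed sketch of those three gRPC calls against containerd's CRI socket follows; error handling is reduced to panics, the socket path is the stock containerd default, the names and UID are taken from the kube-apiserver pod above, and the image tag is an assumption inferred from the kubelet version.

    // crisequence.go: the RunPodSandbox -> CreateContainer -> StartContainer
    // sequence shown in the log, issued directly over the CRI gRPC API.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-apiserver-localhost",
                Uid:       "3b76869aa4bfe2ca3bd2a46e34e222ef",
                Namespace: "kube-system",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
                // Image tag inferred from kubeletVersion v1.31.0; an assumption.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.31.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            panic(err)
        }

        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
        fmt.Println("started:", ctr.ContainerId, err)
    }
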
Apr 30 00:21:24.881862 containerd[1488]: time="2025-04-30T00:21:24.881805376Z" level=info msg="StartContainer for \"abb13ecd774d6817aab201604af3d3b87bf7ca5e6837f0afe9de383dfe159eb5\" returns successfully" Apr 30 00:21:24.882022 containerd[1488]: time="2025-04-30T00:21:24.881836986Z" level=info msg="CreateContainer within sandbox \"c20a4708fc8e95fae81e9b122e0178019d6564e112127d9a2d175ffea3e6db81\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed7ffb39365df1425b1c6f4e6e7702f6f666aad4c4bde55234a9bdb832463406\"" Apr 30 00:21:24.883686 containerd[1488]: time="2025-04-30T00:21:24.882614333Z" level=info msg="StartContainer for \"ed7ffb39365df1425b1c6f4e6e7702f6f666aad4c4bde55234a9bdb832463406\"" Apr 30 00:21:24.923975 kubelet[2240]: E0430 00:21:24.923899 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:24.952664 systemd[1]: Started cri-containerd-ed7ffb39365df1425b1c6f4e6e7702f6f666aad4c4bde55234a9bdb832463406.scope - libcontainer container ed7ffb39365df1425b1c6f4e6e7702f6f666aad4c4bde55234a9bdb832463406. Apr 30 00:21:24.966775 containerd[1488]: time="2025-04-30T00:21:24.966725544Z" level=info msg="CreateContainer within sandbox \"cd794cb2bb95054a7e15ddc75ddda38598c8bad0bb272fb681f408b07c4765b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0376911e483f6376338e041eddd8226dba8f53fe15b780331f44c6a9f932a4fc\"" Apr 30 00:21:24.967316 containerd[1488]: time="2025-04-30T00:21:24.967276664Z" level=info msg="StartContainer for \"0376911e483f6376338e041eddd8226dba8f53fe15b780331f44c6a9f932a4fc\"" Apr 30 00:21:25.029924 systemd[1]: Started cri-containerd-0376911e483f6376338e041eddd8226dba8f53fe15b780331f44c6a9f932a4fc.scope - libcontainer container 0376911e483f6376338e041eddd8226dba8f53fe15b780331f44c6a9f932a4fc. 
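
Because the node config earlier has "CgroupDriver":"systemd" and "CgroupVersion":2, each started container lands in a transient cri-containerd-<id>.scope unit nested under the kubepods-burstable-pod<uid>.slice units systemd created above. The stdlib sketch below walks the unified hierarchy and prints those scopes; the path assumes the standard cgroup v2 mount at /sys/fs/cgroup.

    // scopes.go: lists the per-container systemd scopes beneath the kubelet's
    // kubepods.slice on a cgroup v2 host; read-only, safe to run as any user
    // allowed to traverse /sys/fs/cgroup.
    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    func main() {
        root := "/sys/fs/cgroup/kubepods.slice" // slice created in the log above
        filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return nil // ignore races with containers exiting
            }
            if d.IsDir() && strings.HasPrefix(d.Name(), "cri-containerd-") {
                fmt.Println(path) // one transient scope per running container
            }
            return nil
        })
    }
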
Apr 30 00:21:25.325339 containerd[1488]: time="2025-04-30T00:21:25.325132854Z" level=info msg="StartContainer for \"0376911e483f6376338e041eddd8226dba8f53fe15b780331f44c6a9f932a4fc\" returns successfully" Apr 30 00:21:25.325339 containerd[1488]: time="2025-04-30T00:21:25.325196995Z" level=info msg="StartContainer for \"ed7ffb39365df1425b1c6f4e6e7702f6f666aad4c4bde55234a9bdb832463406\" returns successfully" Apr 30 00:21:25.444746 kubelet[2240]: E0430 00:21:25.444674 2240 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 00:21:25.805687 kubelet[2240]: E0430 00:21:25.805651 2240 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 30 00:21:25.927759 kubelet[2240]: E0430 00:21:25.927716 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:25.928767 kubelet[2240]: E0430 00:21:25.928751 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:25.928890 kubelet[2240]: E0430 00:21:25.928867 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:26.011163 kubelet[2240]: E0430 00:21:26.010944 2240 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183af0b6a86e2bf9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:21:12.859995129 +0000 UTC m=+0.954894609,LastTimestamp:2025-04-30 00:21:12.859995129 +0000 UTC m=+0.954894609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 00:21:26.700047 kubelet[2240]: E0430 00:21:26.699887 2240 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183af0b6a92fbe8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:21:12.872681103 +0000 UTC m=+0.967580594,LastTimestamp:2025-04-30 00:21:12.872681103 +0000 UTC m=+0.967580594,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 30 00:21:26.930456 kubelet[2240]: E0430 00:21:26.930402 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:26.930870 kubelet[2240]: E0430 00:21:26.930469 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:26.930870 kubelet[2240]: E0430 00:21:26.930618 2240 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:27.527836 kubelet[2240]: E0430 00:21:27.527787 2240 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 30 00:21:28.162357 kubelet[2240]: I0430 00:21:28.162322 2240 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:21:28.367361 kubelet[2240]: I0430 00:21:28.367300 2240 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Apr 30 00:21:28.367361 kubelet[2240]: E0430 00:21:28.367360 2240 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 30 00:21:28.631841 kubelet[2240]: E0430 00:21:28.631757 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:28.732805 kubelet[2240]: E0430 00:21:28.732757 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:28.832928 kubelet[2240]: E0430 00:21:28.832882 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:28.933558 kubelet[2240]: E0430 00:21:28.933407 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:29.034557 kubelet[2240]: E0430 00:21:29.034501 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:29.134736 kubelet[2240]: E0430 00:21:29.134668 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:29.235486 kubelet[2240]: E0430 00:21:29.235329 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:29.336420 kubelet[2240]: E0430 00:21:29.336337 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:29.436520 kubelet[2240]: E0430 00:21:29.436457 2240 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:29.864590 kubelet[2240]: I0430 00:21:29.864519 2240 apiserver.go:52] "Watching apiserver" Apr 30 00:21:29.966224 kubelet[2240]: I0430 00:21:29.966155 2240 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 00:21:33.522309 systemd[1]: Reloading requested from client PID 2536 ('systemctl') (unit session-9.scope)... Apr 30 00:21:33.522328 systemd[1]: Reloading... Apr 30 00:21:33.695521 zram_generator::config[2578]: No configuration found. Apr 30 00:21:33.928883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:21:34.026209 systemd[1]: Reloading finished in 503 ms. Apr 30 00:21:34.082198 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:21:34.105049 systemd[1]: kubelet.service: Deactivated successfully. 
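
Every "Attempting to register node" / "Unable to register node with API server" pair above is a failed POST to /api/v1/nodes; once the kube-apiserver container is up and serving, the same call finally succeeds ("Successfully registered node"). In client-go terms the request reduces to roughly the sketch below; the kubeconfig path is hypothetical, and the kubelet additionally populates addresses, capacity, and labels that are omitted here.

    // registernode.go: the bare-bones equivalent of the kubelet's node
    // registration POST, minus the status and label population.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path, not necessarily the kubelet's.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
        created, err := clientset.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{})
        if err != nil {
            fmt.Println("Unable to register node with API server:", err)
            return
        }
        fmt.Println("Successfully registered node:", created.Name)
    }
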
Apr 30 00:21:34.105354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:21:34.105415 systemd[1]: kubelet.service: Consumed 1.797s CPU time, 116.6M memory peak, 0B memory swap peak. Apr 30 00:21:34.114795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:21:34.329741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:21:34.335609 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:21:34.374684 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:21:34.374684 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:21:34.374684 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:21:34.375204 kubelet[2620]: I0430 00:21:34.374744 2620 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:21:34.385483 kubelet[2620]: I0430 00:21:34.385413 2620 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 00:21:34.385483 kubelet[2620]: I0430 00:21:34.385463 2620 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:21:34.385774 kubelet[2620]: I0430 00:21:34.385745 2620 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 00:21:34.387021 kubelet[2620]: I0430 00:21:34.386989 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:21:34.389113 kubelet[2620]: I0430 00:21:34.389072 2620 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:21:34.391377 kubelet[2620]: E0430 00:21:34.391345 2620 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:21:34.391466 kubelet[2620]: I0430 00:21:34.391385 2620 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:21:34.397709 kubelet[2620]: I0430 00:21:34.397664 2620 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:21:34.397833 kubelet[2620]: I0430 00:21:34.397814 2620 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 00:21:34.398021 kubelet[2620]: I0430 00:21:34.397977 2620 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:21:34.398199 kubelet[2620]: I0430 00:21:34.398011 2620 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:21:34.398199 kubelet[2620]: I0430 00:21:34.398199 2620 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:21:34.398338 kubelet[2620]: I0430 00:21:34.398212 2620 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 00:21:34.398338 kubelet[2620]: I0430 00:21:34.398249 2620 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:21:34.398403 kubelet[2620]: I0430 00:21:34.398380 2620 kubelet.go:408] "Attempting to sync node with API server" Apr 30 00:21:34.398403 kubelet[2620]: I0430 00:21:34.398395 2620 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:21:34.398491 kubelet[2620]: I0430 00:21:34.398430 2620 kubelet.go:314] "Adding apiserver pod source" Apr 30 00:21:34.398491 kubelet[2620]: I0430 00:21:34.398472 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:21:34.400842 kubelet[2620]: I0430 00:21:34.399177 2620 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:21:34.400842 kubelet[2620]: I0430 00:21:34.399610 2620 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:21:34.400842 kubelet[2620]: I0430 00:21:34.400060 2620 server.go:1269] "Started kubelet" Apr 30 00:21:34.400842 kubelet[2620]: I0430 00:21:34.400602 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 
00:21:34.401012 kubelet[2620]: I0430 00:21:34.400923 2620 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:21:34.401012 kubelet[2620]: I0430 00:21:34.400983 2620 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:21:34.402106 kubelet[2620]: I0430 00:21:34.402075 2620 server.go:460] "Adding debug handlers to kubelet server" Apr 30 00:21:34.404198 kubelet[2620]: E0430 00:21:34.404156 2620 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:21:34.404993 kubelet[2620]: I0430 00:21:34.404380 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:21:34.406326 kubelet[2620]: I0430 00:21:34.405073 2620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:21:34.406326 kubelet[2620]: I0430 00:21:34.405750 2620 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 00:21:34.406326 kubelet[2620]: I0430 00:21:34.405851 2620 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 00:21:34.406326 kubelet[2620]: I0430 00:21:34.406074 2620 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:21:34.406490 kubelet[2620]: E0430 00:21:34.406463 2620 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:21:34.407373 kubelet[2620]: I0430 00:21:34.407141 2620 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:21:34.407373 kubelet[2620]: I0430 00:21:34.407325 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:21:34.411494 kubelet[2620]: I0430 00:21:34.408688 2620 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:21:34.424930 kubelet[2620]: I0430 00:21:34.424889 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:21:34.426388 kubelet[2620]: I0430 00:21:34.426349 2620 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:21:34.426519 kubelet[2620]: I0430 00:21:34.426372 2620 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:21:34.426519 kubelet[2620]: I0430 00:21:34.426500 2620 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 00:21:34.426605 kubelet[2620]: E0430 00:21:34.426539 2620 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:21:34.445657 kubelet[2620]: I0430 00:21:34.445624 2620 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:21:34.445657 kubelet[2620]: I0430 00:21:34.445644 2620 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:21:34.445657 kubelet[2620]: I0430 00:21:34.445664 2620 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:21:34.445880 kubelet[2620]: I0430 00:21:34.445833 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:21:34.445880 kubelet[2620]: I0430 00:21:34.445847 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:21:34.445880 kubelet[2620]: I0430 00:21:34.445868 2620 policy_none.go:49] "None policy: Start" Apr 30 00:21:34.446359 kubelet[2620]: I0430 00:21:34.446332 2620 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:21:34.446359 kubelet[2620]: I0430 00:21:34.446355 2620 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:21:34.446526 kubelet[2620]: I0430 00:21:34.446507 2620 state_mem.go:75] "Updated machine memory state" Apr 30 00:21:34.450404 kubelet[2620]: I0430 00:21:34.450386 2620 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:21:34.450599 kubelet[2620]: I0430 00:21:34.450578 2620 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:21:34.450638 kubelet[2620]: I0430 00:21:34.450596 2620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:21:34.450884 kubelet[2620]: I0430 00:21:34.450865 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:21:34.556050 kubelet[2620]: I0430 00:21:34.556011 2620 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Apr 30 00:21:34.607864 kubelet[2620]: I0430 00:21:34.607719 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:34.607864 kubelet[2620]: I0430 00:21:34.607759 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:34.607864 kubelet[2620]: I0430 00:21:34.607785 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 
00:21:34.607864 kubelet[2620]: I0430 00:21:34.607804 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:34.607864 kubelet[2620]: I0430 00:21:34.607837 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:21:34.608101 kubelet[2620]: I0430 00:21:34.607854 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:21:34.608101 kubelet[2620]: I0430 00:21:34.607872 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b76869aa4bfe2ca3bd2a46e34e222ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b76869aa4bfe2ca3bd2a46e34e222ef\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:21:34.608101 kubelet[2620]: I0430 00:21:34.607890 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b76869aa4bfe2ca3bd2a46e34e222ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b76869aa4bfe2ca3bd2a46e34e222ef\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:21:34.608101 kubelet[2620]: I0430 00:21:34.607906 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b76869aa4bfe2ca3bd2a46e34e222ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b76869aa4bfe2ca3bd2a46e34e222ef\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:21:34.835566 kubelet[2620]: E0430 00:21:34.835522 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:34.835714 kubelet[2620]: E0430 00:21:34.835631 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:34.835714 kubelet[2620]: E0430 00:21:34.835700 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:34.876701 kubelet[2620]: I0430 00:21:34.876360 2620 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Apr 30 00:21:34.876701 kubelet[2620]: I0430 00:21:34.876475 2620 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Apr 30 00:21:35.399659 kubelet[2620]: I0430 00:21:35.399622 2620 apiserver.go:52] "Watching apiserver" Apr 30 00:21:35.406681 kubelet[2620]: I0430 00:21:35.406656 2620 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 00:21:35.433991 kubelet[2620]: E0430 00:21:35.433970 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:35.434185 kubelet[2620]: E0430 00:21:35.434146 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:35.434185 kubelet[2620]: E0430 00:21:35.434152 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:35.838440 kubelet[2620]: I0430 00:21:35.838359 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.838320513 podStartE2EDuration="1.838320513s" podCreationTimestamp="2025-04-30 00:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:21:35.838219983 +0000 UTC m=+1.497511088" watchObservedRunningTime="2025-04-30 00:21:35.838320513 +0000 UTC m=+1.497611618" Apr 30 00:21:36.344244 kubelet[2620]: I0430 00:21:36.344178 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.344160541 podStartE2EDuration="2.344160541s" podCreationTimestamp="2025-04-30 00:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:21:36.343985642 +0000 UTC m=+2.003276747" watchObservedRunningTime="2025-04-30 00:21:36.344160541 +0000 UTC m=+2.003451646" Apr 30 00:21:36.435163 kubelet[2620]: E0430 00:21:36.435119 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:36.435163 kubelet[2620]: E0430 00:21:36.435153 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:36.440068 kubelet[2620]: I0430 00:21:36.440006 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.4399883669999998 podStartE2EDuration="2.439988367s" podCreationTimestamp="2025-04-30 00:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:21:36.43975118 +0000 UTC m=+2.099042295" watchObservedRunningTime="2025-04-30 00:21:36.439988367 +0000 UTC m=+2.099279472" Apr 30 00:21:37.359064 sudo[2656]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:21:37.359431 sudo[2656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:21:37.436390 kubelet[2620]: E0430 00:21:37.436238 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:37.436390 kubelet[2620]: E0430 00:21:37.436345 2620 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:37.925354 sudo[2656]: pam_unix(sudo:session): session closed for user root Apr 30 00:21:38.437741 kubelet[2620]: E0430 00:21:38.437694 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:39.386641 kubelet[2620]: I0430 00:21:39.386595 2620 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:21:39.386955 containerd[1488]: time="2025-04-30T00:21:39.386922313Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:21:39.387332 kubelet[2620]: I0430 00:21:39.387151 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:21:39.439616 kubelet[2620]: E0430 00:21:39.439555 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:40.017759 sudo[1683]: pam_unix(sudo:session): session closed for user root Apr 30 00:21:40.019288 sshd[1682]: Connection closed by 10.0.0.1 port 46466 Apr 30 00:21:40.019956 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Apr 30 00:21:40.024854 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:46466.service: Deactivated successfully. Apr 30 00:21:40.028003 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:21:40.028319 systemd[1]: session-9.scope: Consumed 4.746s CPU time, 152.3M memory peak, 0B memory swap peak. Apr 30 00:21:40.029149 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:21:40.030134 systemd-logind[1472]: Removed session 9. Apr 30 00:21:40.738646 systemd[1]: Created slice kubepods-besteffort-pod1582af76_36f2_486f_a0df_23b0a00e968d.slice - libcontainer container kubepods-besteffort-pod1582af76_36f2_486f_a0df_23b0a00e968d.slice. Apr 30 00:21:40.820410 systemd[1]: Created slice kubepods-burstable-pod8f539ef2_2fc8_413c_b20a_6d4c2cd67360.slice - libcontainer container kubepods-burstable-pod8f539ef2_2fc8_413c_b20a_6d4c2cd67360.slice. 
Apr 30 00:21:40.869963 kubelet[2620]: I0430 00:21:40.869926 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1582af76-36f2-486f-a0df-23b0a00e968d-lib-modules\") pod \"kube-proxy-dgjpl\" (UID: \"1582af76-36f2-486f-a0df-23b0a00e968d\") " pod="kube-system/kube-proxy-dgjpl" Apr 30 00:21:40.869963 kubelet[2620]: I0430 00:21:40.869968 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gpd\" (UniqueName: \"kubernetes.io/projected/1582af76-36f2-486f-a0df-23b0a00e968d-kube-api-access-w6gpd\") pod \"kube-proxy-dgjpl\" (UID: \"1582af76-36f2-486f-a0df-23b0a00e968d\") " pod="kube-system/kube-proxy-dgjpl" Apr 30 00:21:40.870347 kubelet[2620]: I0430 00:21:40.869996 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1582af76-36f2-486f-a0df-23b0a00e968d-kube-proxy\") pod \"kube-proxy-dgjpl\" (UID: \"1582af76-36f2-486f-a0df-23b0a00e968d\") " pod="kube-system/kube-proxy-dgjpl" Apr 30 00:21:40.870347 kubelet[2620]: I0430 00:21:40.870017 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1582af76-36f2-486f-a0df-23b0a00e968d-xtables-lock\") pod \"kube-proxy-dgjpl\" (UID: \"1582af76-36f2-486f-a0df-23b0a00e968d\") " pod="kube-system/kube-proxy-dgjpl" Apr 30 00:21:40.970205 kubelet[2620]: I0430 00:21:40.970141 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-bpf-maps\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970205 kubelet[2620]: I0430 00:21:40.970177 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-config-path\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970205 kubelet[2620]: I0430 00:21:40.970197 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4x2h\" (UniqueName: \"kubernetes.io/projected/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-kube-api-access-h4x2h\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970205 kubelet[2620]: I0430 00:21:40.970215 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-lib-modules\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970519 kubelet[2620]: I0430 00:21:40.970292 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-host-proc-sys-kernel\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970519 kubelet[2620]: I0430 00:21:40.970329 2620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-hostproc\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970519 kubelet[2620]: I0430 00:21:40.970378 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-cgroup\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970519 kubelet[2620]: I0430 00:21:40.970402 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-clustermesh-secrets\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970519 kubelet[2620]: I0430 00:21:40.970425 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-etc-cni-netd\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970519 kubelet[2620]: I0430 00:21:40.970493 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-xtables-lock\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970846 kubelet[2620]: I0430 00:21:40.970508 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-host-proc-sys-net\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970846 kubelet[2620]: I0430 00:21:40.970521 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-hubble-tls\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970846 kubelet[2620]: I0430 00:21:40.970538 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-run\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:40.970846 kubelet[2620]: I0430 00:21:40.970551 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cni-path\") pod \"cilium-4tjz9\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " pod="kube-system/cilium-4tjz9" Apr 30 00:21:41.957574 kubelet[2620]: E0430 00:21:41.957509 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:41.958233 containerd[1488]: 
time="2025-04-30T00:21:41.958182074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgjpl,Uid:1582af76-36f2-486f-a0df-23b0a00e968d,Namespace:kube-system,Attempt:0,}" Apr 30 00:21:42.023568 kubelet[2620]: E0430 00:21:42.023517 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:42.024099 containerd[1488]: time="2025-04-30T00:21:42.024054535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4tjz9,Uid:8f539ef2-2fc8-413c-b20a-6d4c2cd67360,Namespace:kube-system,Attempt:0,}" Apr 30 00:21:42.075829 systemd[1]: Created slice kubepods-besteffort-pod04ec286c_b961_4235_8aa7_fa7326f986a0.slice - libcontainer container kubepods-besteffort-pod04ec286c_b961_4235_8aa7_fa7326f986a0.slice. Apr 30 00:21:42.176057 kubelet[2620]: I0430 00:21:42.175973 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kslq\" (UniqueName: \"kubernetes.io/projected/04ec286c-b961-4235-8aa7-fa7326f986a0-kube-api-access-5kslq\") pod \"cilium-operator-5d85765b45-96w6m\" (UID: \"04ec286c-b961-4235-8aa7-fa7326f986a0\") " pod="kube-system/cilium-operator-5d85765b45-96w6m" Apr 30 00:21:42.176057 kubelet[2620]: I0430 00:21:42.176033 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04ec286c-b961-4235-8aa7-fa7326f986a0-cilium-config-path\") pod \"cilium-operator-5d85765b45-96w6m\" (UID: \"04ec286c-b961-4235-8aa7-fa7326f986a0\") " pod="kube-system/cilium-operator-5d85765b45-96w6m" Apr 30 00:21:42.378911 kubelet[2620]: E0430 00:21:42.378858 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:42.379432 containerd[1488]: time="2025-04-30T00:21:42.379384455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-96w6m,Uid:04ec286c-b961-4235-8aa7-fa7326f986a0,Namespace:kube-system,Attempt:0,}" Apr 30 00:21:42.723552 containerd[1488]: time="2025-04-30T00:21:42.723123601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:21:42.723552 containerd[1488]: time="2025-04-30T00:21:42.723193672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:21:42.723552 containerd[1488]: time="2025-04-30T00:21:42.723211976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:42.723552 containerd[1488]: time="2025-04-30T00:21:42.723297167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:42.752673 systemd[1]: Started cri-containerd-9b4d7f2b98ed71088d295c964e685fff35cee50136e470eb799f6717102432be.scope - libcontainer container 9b4d7f2b98ed71088d295c964e685fff35cee50136e470eb799f6717102432be. Apr 30 00:21:42.754910 containerd[1488]: time="2025-04-30T00:21:42.754485134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:21:42.754910 containerd[1488]: time="2025-04-30T00:21:42.754561478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:21:42.754910 containerd[1488]: time="2025-04-30T00:21:42.754576806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:42.754910 containerd[1488]: time="2025-04-30T00:21:42.754661386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:42.779801 containerd[1488]: time="2025-04-30T00:21:42.779686010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:21:42.779936 containerd[1488]: time="2025-04-30T00:21:42.779774376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:21:42.779936 containerd[1488]: time="2025-04-30T00:21:42.779795255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:42.779936 containerd[1488]: time="2025-04-30T00:21:42.779902666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:21:42.784634 systemd[1]: Started cri-containerd-bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c.scope - libcontainer container bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c. Apr 30 00:21:42.799764 systemd[1]: Started cri-containerd-546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4.scope - libcontainer container 546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4. 
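Each "Started cri-containerd-<id>.scope" line above is a pod sandbox (pause container) coming up under its own transient systemd scope; the three scopes here correspond to the kube-proxy-dgjpl, cilium-4tjz9 and cilium-operator sandboxes requested moments earlier. On a comparable node they can be enumerated like this — a sketch, where the socket path is the conventional containerd endpoint rather than one shown in this log:

    # Pod sandboxes known to containerd, matched by the sandbox ids in the log
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods

    # The same ids appear as transient systemd scopes
    systemctl list-units 'cri-containerd-*.scope'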
Apr 30 00:21:42.808970 containerd[1488]: time="2025-04-30T00:21:42.808478505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgjpl,Uid:1582af76-36f2-486f-a0df-23b0a00e968d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b4d7f2b98ed71088d295c964e685fff35cee50136e470eb799f6717102432be\"" Apr 30 00:21:42.810700 kubelet[2620]: E0430 00:21:42.809634 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:42.813105 containerd[1488]: time="2025-04-30T00:21:42.813060717Z" level=info msg="CreateContainer within sandbox \"9b4d7f2b98ed71088d295c964e685fff35cee50136e470eb799f6717102432be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:21:42.825120 containerd[1488]: time="2025-04-30T00:21:42.825066512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4tjz9,Uid:8f539ef2-2fc8-413c-b20a-6d4c2cd67360,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\"" Apr 30 00:21:42.826059 kubelet[2620]: E0430 00:21:42.826029 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:42.828795 containerd[1488]: time="2025-04-30T00:21:42.828427148Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:21:42.850776 containerd[1488]: time="2025-04-30T00:21:42.850719176Z" level=info msg="CreateContainer within sandbox \"9b4d7f2b98ed71088d295c964e685fff35cee50136e470eb799f6717102432be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83b1818cd4c4292da1f3815e287fe156a2cc5839c8a4ae9c4524e9f8302a5347\"" Apr 30 00:21:42.851395 containerd[1488]: time="2025-04-30T00:21:42.851367034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-96w6m,Uid:04ec286c-b961-4235-8aa7-fa7326f986a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4\"" Apr 30 00:21:42.852005 containerd[1488]: time="2025-04-30T00:21:42.851813172Z" level=info msg="StartContainer for \"83b1818cd4c4292da1f3815e287fe156a2cc5839c8a4ae9c4524e9f8302a5347\"" Apr 30 00:21:42.852994 kubelet[2620]: E0430 00:21:42.852945 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:42.886640 systemd[1]: Started cri-containerd-83b1818cd4c4292da1f3815e287fe156a2cc5839c8a4ae9c4524e9f8302a5347.scope - libcontainer container 83b1818cd4c4292da1f3815e287fe156a2cc5839c8a4ae9c4524e9f8302a5347. 
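The PullImage request above pins the Cilium image by both tag and digest (v1.12.5@sha256:06ce2b…); with that reference form the tag is informational and the digest is what containerd actually resolves, so the node gets byte-identical content on every pull. Reproducing the pull by hand might look like this sketch:

    # Pull exactly what the kubelet asked for; if tag and digest ever
    # disagree, the digest wins.
    crictl pull quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5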
Apr 30 00:21:42.927067 containerd[1488]: time="2025-04-30T00:21:42.927013500Z" level=info msg="StartContainer for \"83b1818cd4c4292da1f3815e287fe156a2cc5839c8a4ae9c4524e9f8302a5347\" returns successfully" Apr 30 00:21:43.450248 kubelet[2620]: E0430 00:21:43.450209 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:43.479348 kubelet[2620]: I0430 00:21:43.479271 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dgjpl" podStartSLOduration=3.479251234 podStartE2EDuration="3.479251234s" podCreationTimestamp="2025-04-30 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:21:43.479007236 +0000 UTC m=+9.138298341" watchObservedRunningTime="2025-04-30 00:21:43.479251234 +0000 UTC m=+9.138542339" Apr 30 00:21:44.661677 kubelet[2620]: E0430 00:21:44.661607 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:45.466468 kubelet[2620]: E0430 00:21:45.466413 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:21:52.904248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530648462.mount: Deactivated successfully. Apr 30 00:22:00.559932 containerd[1488]: time="2025-04-30T00:22:00.559762052Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:22:00.566644 containerd[1488]: time="2025-04-30T00:22:00.566565971Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 00:22:00.574368 containerd[1488]: time="2025-04-30T00:22:00.574321586Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:22:00.576180 containerd[1488]: time="2025-04-30T00:22:00.576123608Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.747374235s" Apr 30 00:22:00.576180 containerd[1488]: time="2025-04-30T00:22:00.576172339Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 00:22:00.577606 containerd[1488]: time="2025-04-30T00:22:00.577564172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:22:00.590333 containerd[1488]: time="2025-04-30T00:22:00.590279684Z" level=info msg="CreateContainer within sandbox 
\"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:22:00.627291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734431438.mount: Deactivated successfully. Apr 30 00:22:00.631328 containerd[1488]: time="2025-04-30T00:22:00.631271311Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\"" Apr 30 00:22:00.632791 containerd[1488]: time="2025-04-30T00:22:00.632732072Z" level=info msg="StartContainer for \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\"" Apr 30 00:22:00.664596 systemd[1]: Started cri-containerd-ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57.scope - libcontainer container ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57. Apr 30 00:22:00.712611 systemd[1]: cri-containerd-ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57.scope: Deactivated successfully. Apr 30 00:22:00.712837 containerd[1488]: time="2025-04-30T00:22:00.712749968Z" level=info msg="StartContainer for \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\" returns successfully" Apr 30 00:22:01.497463 kubelet[2620]: E0430 00:22:01.497406 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:01.624438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57-rootfs.mount: Deactivated successfully. 
Apr 30 00:22:02.243715 containerd[1488]: time="2025-04-30T00:22:02.243651037Z" level=info msg="shim disconnected" id=ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57 namespace=k8s.io Apr 30 00:22:02.243715 containerd[1488]: time="2025-04-30T00:22:02.243699108Z" level=warning msg="cleaning up after shim disconnected" id=ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57 namespace=k8s.io Apr 30 00:22:02.243715 containerd[1488]: time="2025-04-30T00:22:02.243709257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:22:02.500341 kubelet[2620]: E0430 00:22:02.500202 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:02.501881 containerd[1488]: time="2025-04-30T00:22:02.501832383Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:22:02.900352 containerd[1488]: time="2025-04-30T00:22:02.900280991Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\"" Apr 30 00:22:02.900815 containerd[1488]: time="2025-04-30T00:22:02.900758496Z" level=info msg="StartContainer for \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\"" Apr 30 00:22:02.935626 systemd[1]: Started cri-containerd-352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272.scope - libcontainer container 352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272. Apr 30 00:22:02.965884 containerd[1488]: time="2025-04-30T00:22:02.965828359Z" level=info msg="StartContainer for \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\" returns successfully" Apr 30 00:22:02.978413 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:22:02.978744 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:22:02.978997 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:22:02.984855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:22:02.985125 systemd[1]: cri-containerd-352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272.scope: Deactivated successfully. Apr 30 00:22:03.002697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272-rootfs.mount: Deactivated successfully. Apr 30 00:22:03.010222 containerd[1488]: time="2025-04-30T00:22:03.010160107Z" level=info msg="shim disconnected" id=352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272 namespace=k8s.io Apr 30 00:22:03.010222 containerd[1488]: time="2025-04-30T00:22:03.010220610Z" level=warning msg="cleaning up after shim disconnected" id=352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272 namespace=k8s.io Apr 30 00:22:03.010403 containerd[1488]: time="2025-04-30T00:22:03.010230910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:22:03.018153 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
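Two things are interleaved above: the "shim disconnected" / "cleaning up dead shim" warnings are routine teardown after a run-to-completion container exits, and the systemd-sysctl stop/start bracketing apply-sysctl-overwrites is the distro reconciling kernel parameters after Cilium's init container rewrites them. Reverse-path filtering is the classic parameter Cilium relaxes; a spot check might look like this sketch — the exact keys Cilium touches are version-dependent and not shown in this log, and the per-device key assumes the cilium_host interface exists:

    # Cilium generally wants reverse-path filtering relaxed on its devices
    sysctl net.ipv4.conf.all.rp_filter
    sysctl net.ipv4.conf.cilium_host.rp_filter   # hypothetical: device must exist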
Apr 30 00:22:03.503760 kubelet[2620]: E0430 00:22:03.503719 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:03.505333 containerd[1488]: time="2025-04-30T00:22:03.505296653Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:22:03.543143 containerd[1488]: time="2025-04-30T00:22:03.543071785Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\"" Apr 30 00:22:03.543735 containerd[1488]: time="2025-04-30T00:22:03.543693101Z" level=info msg="StartContainer for \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\"" Apr 30 00:22:03.569798 systemd[1]: Started cri-containerd-de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5.scope - libcontainer container de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5. Apr 30 00:22:03.603260 systemd[1]: cri-containerd-de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5.scope: Deactivated successfully. Apr 30 00:22:03.609485 containerd[1488]: time="2025-04-30T00:22:03.609329362Z" level=info msg="StartContainer for \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\" returns successfully" Apr 30 00:22:03.658139 containerd[1488]: time="2025-04-30T00:22:03.658058298Z" level=info msg="shim disconnected" id=de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5 namespace=k8s.io Apr 30 00:22:03.658139 containerd[1488]: time="2025-04-30T00:22:03.658121177Z" level=warning msg="cleaning up after shim disconnected" id=de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5 namespace=k8s.io Apr 30 00:22:03.658139 containerd[1488]: time="2025-04-30T00:22:03.658129913Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:22:04.511276 kubelet[2620]: E0430 00:22:04.510678 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:04.516822 containerd[1488]: time="2025-04-30T00:22:04.514397739Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:22:04.804592 containerd[1488]: time="2025-04-30T00:22:04.804488383Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\"" Apr 30 00:22:04.815577 containerd[1488]: time="2025-04-30T00:22:04.808104408Z" level=info msg="StartContainer for \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\"" Apr 30 00:22:04.887847 systemd[1]: Started cri-containerd-bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571.scope - libcontainer container bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571. Apr 30 00:22:04.996050 systemd[1]: cri-containerd-bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571.scope: Deactivated successfully. 
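The mount-bpf-fs init container above exists to ensure the BPF filesystem is mounted at /sys/fs/bpf, so Cilium's BPF maps survive agent restarts instead of living in a mount namespace that dies with the pod. The manual equivalent, as a sketch:

    # Is bpffs already mounted?
    mount -t bpf            # or: mount | grep /sys/fs/bpf

    # If not, this is essentially what the init container does (needs root)
    mount -t bpf bpffs /sys/fs/bpf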
Apr 30 00:22:05.004176 containerd[1488]: time="2025-04-30T00:22:05.003771880Z" level=info msg="StartContainer for \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\" returns successfully" Apr 30 00:22:05.076670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571-rootfs.mount: Deactivated successfully. Apr 30 00:22:05.203847 containerd[1488]: time="2025-04-30T00:22:05.203770997Z" level=info msg="shim disconnected" id=bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571 namespace=k8s.io Apr 30 00:22:05.204363 containerd[1488]: time="2025-04-30T00:22:05.204145499Z" level=warning msg="cleaning up after shim disconnected" id=bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571 namespace=k8s.io Apr 30 00:22:05.204363 containerd[1488]: time="2025-04-30T00:22:05.204163854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:22:05.531091 kubelet[2620]: E0430 00:22:05.529163 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:05.544463 containerd[1488]: time="2025-04-30T00:22:05.539999741Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:22:06.637019 containerd[1488]: time="2025-04-30T00:22:06.636935648Z" level=info msg="CreateContainer within sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\"" Apr 30 00:22:06.637674 containerd[1488]: time="2025-04-30T00:22:06.637634198Z" level=info msg="StartContainer for \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\"" Apr 30 00:22:06.667573 systemd[1]: Started cri-containerd-30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d.scope - libcontainer container 30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d. 
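After four run-to-completion init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), cilium-agent above is the first long-running container in the pod — unlike its predecessors, its scope should stay active. The Cilium 1.12 agent image ships the cilium CLI, so its health can be queried through the pod; a sketch:

    # Prints a one-line OK (exit code 0) when the agent reports healthy
    kubectl -n kube-system exec cilium-4tjz9 -- cilium status --brief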
Apr 30 00:22:06.752224 containerd[1488]: time="2025-04-30T00:22:06.752158994Z" level=info msg="StartContainer for \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\" returns successfully" Apr 30 00:22:07.503179 kubelet[2620]: I0430 00:22:07.502405 2620 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 00:22:07.535907 kubelet[2620]: E0430 00:22:07.535859 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:07.811882 containerd[1488]: time="2025-04-30T00:22:07.811715840Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:22:07.820706 containerd[1488]: time="2025-04-30T00:22:07.820621423Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 00:22:07.826652 kubelet[2620]: I0430 00:22:07.826559 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4tjz9" podStartSLOduration=10.076401736 podStartE2EDuration="27.826537526s" podCreationTimestamp="2025-04-30 00:21:40 +0000 UTC" firstStartedPulling="2025-04-30 00:21:42.827239266 +0000 UTC m=+8.486530371" lastFinishedPulling="2025-04-30 00:22:00.577375056 +0000 UTC m=+26.236666161" observedRunningTime="2025-04-30 00:22:07.616555238 +0000 UTC m=+33.275846353" watchObservedRunningTime="2025-04-30 00:22:07.826537526 +0000 UTC m=+33.485828641" Apr 30 00:22:07.828832 containerd[1488]: time="2025-04-30T00:22:07.828764460Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:22:07.830806 containerd[1488]: time="2025-04-30T00:22:07.830333805Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.252724649s" Apr 30 00:22:07.830806 containerd[1488]: time="2025-04-30T00:22:07.830367529Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 00:22:07.833212 containerd[1488]: time="2025-04-30T00:22:07.833151272Z" level=info msg="CreateContainer within sandbox \"546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:22:07.838513 systemd[1]: Created slice kubepods-burstable-pode8a76bb5_2ff2_46cf_a2b4_36408dd69ed8.slice - libcontainer container kubepods-burstable-pode8a76bb5_2ff2_46cf_a2b4_36408dd69ed8.slice. Apr 30 00:22:07.853668 systemd[1]: Created slice kubepods-burstable-podf6b2a82c_0a35_4826_a0f5_6efd05590a2f.slice - libcontainer container kubepods-burstable-podf6b2a82c_0a35_4826_a0f5_6efd05590a2f.slice. 
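"Fast updating node status as it just became ready" above is the payoff of the agent start: with a CNI plugin finally in place, the node's Ready condition flips to True, which is what lets the two coredns pods (the burstable slices for UIDs e8a76bb5-… and f6b2a82c-… just created) proceed to sandbox creation. A verification sketch:

    # Should print "True" once the CNI is serving; node name from this log
    kubectl get node localhost -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'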
Apr 30 00:22:07.941686 kubelet[2620]: I0430 00:22:07.941632 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24bwm\" (UniqueName: \"kubernetes.io/projected/e8a76bb5-2ff2-46cf-a2b4-36408dd69ed8-kube-api-access-24bwm\") pod \"coredns-6f6b679f8f-rzkrc\" (UID: \"e8a76bb5-2ff2-46cf-a2b4-36408dd69ed8\") " pod="kube-system/coredns-6f6b679f8f-rzkrc" Apr 30 00:22:07.941686 kubelet[2620]: I0430 00:22:07.941684 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8a76bb5-2ff2-46cf-a2b4-36408dd69ed8-config-volume\") pod \"coredns-6f6b679f8f-rzkrc\" (UID: \"e8a76bb5-2ff2-46cf-a2b4-36408dd69ed8\") " pod="kube-system/coredns-6f6b679f8f-rzkrc" Apr 30 00:22:07.984543 containerd[1488]: time="2025-04-30T00:22:07.984481502Z" level=info msg="CreateContainer within sandbox \"546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\"" Apr 30 00:22:07.985135 containerd[1488]: time="2025-04-30T00:22:07.985097101Z" level=info msg="StartContainer for \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\"" Apr 30 00:22:08.024821 systemd[1]: Started cri-containerd-080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529.scope - libcontainer container 080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529. Apr 30 00:22:08.043108 kubelet[2620]: I0430 00:22:08.042263 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvszd\" (UniqueName: \"kubernetes.io/projected/f6b2a82c-0a35-4826-a0f5-6efd05590a2f-kube-api-access-zvszd\") pod \"coredns-6f6b679f8f-2zw22\" (UID: \"f6b2a82c-0a35-4826-a0f5-6efd05590a2f\") " pod="kube-system/coredns-6f6b679f8f-2zw22" Apr 30 00:22:08.043108 kubelet[2620]: I0430 00:22:08.042302 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6b2a82c-0a35-4826-a0f5-6efd05590a2f-config-volume\") pod \"coredns-6f6b679f8f-2zw22\" (UID: \"f6b2a82c-0a35-4826-a0f5-6efd05590a2f\") " pod="kube-system/coredns-6f6b679f8f-2zw22" Apr 30 00:22:08.092932 containerd[1488]: time="2025-04-30T00:22:08.092776847Z" level=info msg="StartContainer for \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\" returns successfully" Apr 30 00:22:08.141823 kubelet[2620]: E0430 00:22:08.141672 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:08.142472 containerd[1488]: time="2025-04-30T00:22:08.142248653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rzkrc,Uid:e8a76bb5-2ff2-46cf-a2b4-36408dd69ed8,Namespace:kube-system,Attempt:0,}" Apr 30 00:22:08.459262 kubelet[2620]: E0430 00:22:08.458696 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:08.460140 containerd[1488]: time="2025-04-30T00:22:08.459768522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2zw22,Uid:f6b2a82c-0a35-4826-a0f5-6efd05590a2f,Namespace:kube-system,Attempt:0,}" Apr 30 00:22:08.539002 
kubelet[2620]: E0430 00:22:08.538965 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:08.539526 kubelet[2620]: E0430 00:22:08.539102 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:08.689456 kubelet[2620]: I0430 00:22:08.689359 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-96w6m" podStartSLOduration=2.711416221 podStartE2EDuration="27.689331181s" podCreationTimestamp="2025-04-30 00:21:41 +0000 UTC" firstStartedPulling="2025-04-30 00:21:42.853681644 +0000 UTC m=+8.512972749" lastFinishedPulling="2025-04-30 00:22:07.831596614 +0000 UTC m=+33.490887709" observedRunningTime="2025-04-30 00:22:08.688549273 +0000 UTC m=+34.347840378" watchObservedRunningTime="2025-04-30 00:22:08.689331181 +0000 UTC m=+34.348622286" Apr 30 00:22:09.541042 kubelet[2620]: E0430 00:22:09.540991 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:09.541628 kubelet[2620]: E0430 00:22:09.541156 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:10.543418 kubelet[2620]: E0430 00:22:10.543369 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:11.551566 systemd-networkd[1421]: cilium_host: Link UP Apr 30 00:22:11.551765 systemd-networkd[1421]: cilium_net: Link UP Apr 30 00:22:11.551963 systemd-networkd[1421]: cilium_net: Gained carrier Apr 30 00:22:11.552167 systemd-networkd[1421]: cilium_host: Gained carrier Apr 30 00:22:11.579560 systemd-networkd[1421]: cilium_host: Gained IPv6LL Apr 30 00:22:11.663653 systemd-networkd[1421]: cilium_vxlan: Link UP Apr 30 00:22:11.663899 systemd-networkd[1421]: cilium_vxlan: Gained carrier Apr 30 00:22:11.884504 kernel: NET: Registered PF_ALG protocol family Apr 30 00:22:12.010648 systemd-networkd[1421]: cilium_net: Gained IPv6LL Apr 30 00:22:12.605957 systemd-networkd[1421]: lxc_health: Link UP Apr 30 00:22:12.616729 systemd-networkd[1421]: lxc_health: Gained carrier Apr 30 00:22:12.817994 systemd-networkd[1421]: lxcadfcd9d58e21: Link UP Apr 30 00:22:12.824510 kernel: eth0: renamed from tmp712ec Apr 30 00:22:12.832813 systemd-networkd[1421]: lxcadfcd9d58e21: Gained carrier Apr 30 00:22:12.872894 systemd-networkd[1421]: lxccbaf0a384633: Link UP Apr 30 00:22:12.882475 kernel: eth0: renamed from tmp02f63 Apr 30 00:22:12.891305 systemd-networkd[1421]: lxccbaf0a384633: Gained carrier Apr 30 00:22:13.450629 systemd-networkd[1421]: cilium_vxlan: Gained IPv6LL Apr 30 00:22:14.025999 kubelet[2620]: E0430 00:22:14.025942 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:14.410725 systemd-networkd[1421]: lxc_health: Gained IPv6LL Apr 30 00:22:14.549407 kubelet[2620]: E0430 00:22:14.549237 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:14.731614 systemd-networkd[1421]: lxccbaf0a384633: Gained IPv6LL Apr 30 00:22:14.796589 systemd-networkd[1421]: lxcadfcd9d58e21: Gained IPv6LL Apr 30 00:22:15.649899 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:33020.service - OpenSSH per-connection server daemon (10.0.0.1:33020). Apr 30 00:22:15.789520 sshd[3841]: Accepted publickey for core from 10.0.0.1 port 33020 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:15.791382 sshd-session[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:15.796534 systemd-logind[1472]: New session 10 of user core. Apr 30 00:22:15.804634 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:22:15.950763 sshd[3843]: Connection closed by 10.0.0.1 port 33020 Apr 30 00:22:15.952649 sshd-session[3841]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:15.957784 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:33020.service: Deactivated successfully. Apr 30 00:22:15.960765 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:22:15.961577 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:22:15.962774 systemd-logind[1472]: Removed session 10. Apr 30 00:22:16.419939 containerd[1488]: time="2025-04-30T00:22:16.419846522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:22:16.420842 containerd[1488]: time="2025-04-30T00:22:16.420776685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:22:16.420919 containerd[1488]: time="2025-04-30T00:22:16.420886837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:22:16.421121 containerd[1488]: time="2025-04-30T00:22:16.421091151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:22:16.421807 containerd[1488]: time="2025-04-30T00:22:16.421727668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:22:16.421807 containerd[1488]: time="2025-04-30T00:22:16.421783977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:22:16.421878 containerd[1488]: time="2025-04-30T00:22:16.421800638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:22:16.421962 containerd[1488]: time="2025-04-30T00:22:16.421889660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:22:16.451636 systemd[1]: Started cri-containerd-02f63385c74549a61c0b3a68fe19af175734f0cd6eaa434d7abe3aa89b3fee93.scope - libcontainer container 02f63385c74549a61c0b3a68fe19af175734f0cd6eaa434d7abe3aa89b3fee93. Apr 30 00:22:16.456339 systemd[1]: Started cri-containerd-712ec96fd028129450a0bc814903e790075249b2e727a69a2613d444937ae79b.scope - libcontainer container 712ec96fd028129450a0bc814903e790075249b2e727a69a2613d444937ae79b. 
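The burst of systemd-networkd lines above is Cilium assembling its datapath: cilium_host/cilium_net are a veth pair anchoring the host side of the network, cilium_vxlan carries the overlay (tunneling mode, consistent with the 192.168.0.0/24 pod CIDR pushed earlier), lxc_health backs the agent's health probes, and each lxc* device — note the paired "eth0: renamed from tmp…" kernel lines — is one pod's veth. Listed from the host, as a sketch:

    # One line per Cilium-managed device, with state and addresses
    ip -br link show | grep -E '^(cilium|lxc)'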
Apr 30 00:22:16.468526 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:22:16.470994 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:22:16.495035 containerd[1488]: time="2025-04-30T00:22:16.494988285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rzkrc,Uid:e8a76bb5-2ff2-46cf-a2b4-36408dd69ed8,Namespace:kube-system,Attempt:0,} returns sandbox id \"02f63385c74549a61c0b3a68fe19af175734f0cd6eaa434d7abe3aa89b3fee93\"" Apr 30 00:22:16.496732 kubelet[2620]: E0430 00:22:16.496702 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:16.499010 containerd[1488]: time="2025-04-30T00:22:16.498754274Z" level=info msg="CreateContainer within sandbox \"02f63385c74549a61c0b3a68fe19af175734f0cd6eaa434d7abe3aa89b3fee93\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:22:16.514091 containerd[1488]: time="2025-04-30T00:22:16.514010171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2zw22,Uid:f6b2a82c-0a35-4826-a0f5-6efd05590a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"712ec96fd028129450a0bc814903e790075249b2e727a69a2613d444937ae79b\"" Apr 30 00:22:16.514972 kubelet[2620]: E0430 00:22:16.514931 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:16.516795 containerd[1488]: time="2025-04-30T00:22:16.516407001Z" level=info msg="CreateContainer within sandbox \"712ec96fd028129450a0bc814903e790075249b2e727a69a2613d444937ae79b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:22:16.746527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637840912.mount: Deactivated successfully. Apr 30 00:22:17.016310 containerd[1488]: time="2025-04-30T00:22:17.016132277Z" level=info msg="CreateContainer within sandbox \"02f63385c74549a61c0b3a68fe19af175734f0cd6eaa434d7abe3aa89b3fee93\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79c04291b3930fcb904ba18d71ef477bad2517c8cd823058c9ec93ec330e2725\"" Apr 30 00:22:17.016805 containerd[1488]: time="2025-04-30T00:22:17.016763552Z" level=info msg="StartContainer for \"79c04291b3930fcb904ba18d71ef477bad2517c8cd823058c9ec93ec330e2725\"" Apr 30 00:22:17.050712 systemd[1]: Started cri-containerd-79c04291b3930fcb904ba18d71ef477bad2517c8cd823058c9ec93ec330e2725.scope - libcontainer container 79c04291b3930fcb904ba18d71ef477bad2517c8cd823058c9ec93ec330e2725. 
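
The RunPodSandbox returns for the two coredns pods above, followed by CreateContainer and (further down) StartContainer, are the standard CRI sequence kubelet drives against containerd over its unix socket. A sketch of the call shape against the CRI RuntimeService API, with empty placeholder configs (kubelet fills these from the pod spec; as written this illustrates the sequence and would not start a real pod):

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's CRI endpoint.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox returns the sandbox ID seen in the log.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{},
        })
        if err != nil {
            panic(err)
        }
        // 2. CreateContainer places the coredns container into that sandbox.
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config:       &runtimeapi.ContainerConfig{},
        })
        if err != nil {
            panic(err)
        }
        // 3. StartContainer runs it ("StartContainer ... returns successfully").
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: ctr.ContainerId,
        }); err != nil {
            panic(err)
        }
    }
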
Apr 30 00:22:17.254668 containerd[1488]: time="2025-04-30T00:22:17.254590422Z" level=info msg="CreateContainer within sandbox \"712ec96fd028129450a0bc814903e790075249b2e727a69a2613d444937ae79b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71c5bdd2a8a93f8a5463a92ddd5e505e495afff27302583f6be7c2a7e34afc92\"" Apr 30 00:22:17.254827 containerd[1488]: time="2025-04-30T00:22:17.254626561Z" level=info msg="StartContainer for \"79c04291b3930fcb904ba18d71ef477bad2517c8cd823058c9ec93ec330e2725\" returns successfully" Apr 30 00:22:17.255185 containerd[1488]: time="2025-04-30T00:22:17.255149709Z" level=info msg="StartContainer for \"71c5bdd2a8a93f8a5463a92ddd5e505e495afff27302583f6be7c2a7e34afc92\"" Apr 30 00:22:17.284641 systemd[1]: Started cri-containerd-71c5bdd2a8a93f8a5463a92ddd5e505e495afff27302583f6be7c2a7e34afc92.scope - libcontainer container 71c5bdd2a8a93f8a5463a92ddd5e505e495afff27302583f6be7c2a7e34afc92. Apr 30 00:22:17.368778 containerd[1488]: time="2025-04-30T00:22:17.368720969Z" level=info msg="StartContainer for \"71c5bdd2a8a93f8a5463a92ddd5e505e495afff27302583f6be7c2a7e34afc92\" returns successfully" Apr 30 00:22:17.429344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530744510.mount: Deactivated successfully. Apr 30 00:22:17.561609 kubelet[2620]: E0430 00:22:17.561474 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:17.563747 kubelet[2620]: E0430 00:22:17.563657 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:17.712725 kubelet[2620]: I0430 00:22:17.712648 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2zw22" podStartSLOduration=36.712631565 podStartE2EDuration="36.712631565s" podCreationTimestamp="2025-04-30 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:22:17.712397414 +0000 UTC m=+43.371688519" watchObservedRunningTime="2025-04-30 00:22:17.712631565 +0000 UTC m=+43.371922670" Apr 30 00:22:17.806697 kubelet[2620]: I0430 00:22:17.806637 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rzkrc" podStartSLOduration=36.806620124 podStartE2EDuration="36.806620124s" podCreationTimestamp="2025-04-30 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:22:17.805824101 +0000 UTC m=+43.465115216" watchObservedRunningTime="2025-04-30 00:22:17.806620124 +0000 UTC m=+43.465911229" Apr 30 00:22:18.565610 kubelet[2620]: E0430 00:22:18.565555 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:18.566181 kubelet[2620]: E0430 00:22:18.565826 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:19.567974 kubelet[2620]: E0430 00:22:19.567680 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:19.567974 kubelet[2620]: E0430 00:22:19.567680 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:20.963732 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:37772.service - OpenSSH per-connection server daemon (10.0.0.1:37772). Apr 30 00:22:21.005727 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:21.007230 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:21.011409 systemd-logind[1472]: New session 11 of user core. Apr 30 00:22:21.025620 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:22:21.160632 sshd[4031]: Connection closed by 10.0.0.1 port 37772 Apr 30 00:22:21.160998 sshd-session[4029]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:21.165620 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:37772.service: Deactivated successfully. Apr 30 00:22:21.167982 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:22:21.168850 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:22:21.169968 systemd-logind[1472]: Removed session 11. Apr 30 00:22:26.172264 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:37774.service - OpenSSH per-connection server daemon (10.0.0.1:37774). Apr 30 00:22:26.208977 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 37774 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:26.210594 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:26.214601 systemd-logind[1472]: New session 12 of user core. Apr 30 00:22:26.230818 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:22:26.352582 sshd[4048]: Connection closed by 10.0.0.1 port 37774 Apr 30 00:22:26.352983 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:26.357646 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:37774.service: Deactivated successfully. Apr 30 00:22:26.360624 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:22:26.361392 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:22:26.362433 systemd-logind[1472]: Removed session 12. Apr 30 00:22:31.366607 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:37472.service - OpenSSH per-connection server daemon (10.0.0.1:37472). Apr 30 00:22:31.408747 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 37472 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:31.410388 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:31.414566 systemd-logind[1472]: New session 13 of user core. Apr 30 00:22:31.429605 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:22:31.542994 sshd[4063]: Connection closed by 10.0.0.1 port 37472 Apr 30 00:22:31.543411 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:31.547578 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:37472.service: Deactivated successfully. Apr 30 00:22:31.549567 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:22:31.550371 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit. 
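
The pod_startup_latency_tracker entries above also show how the SLO metric is derived: podStartSLOduration is the end-to-end startup duration minus time spent pulling images. For cilium-operator that gives 27.689s E2E minus roughly 24.978s of pulling, hence the 2.711s SLO duration, while the two coredns pods carry zero-valued pull timestamps, so their SLO and E2E durations coincide (36.712s and 36.806s). A quick check of that arithmetic, using timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // The log prints timestamps in Go's default time.Time format.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-04-30 00:21:41 +0000 UTC")
        firstPull := mustParse("2025-04-30 00:21:42.853681644 +0000 UTC")
        lastPull := mustParse("2025-04-30 00:22:07.831596614 +0000 UTC")
        running := mustParse("2025-04-30 00:22:08.689331181 +0000 UTC")

        e2e := running.Sub(created)      // 27.689331181s, the logged E2E duration
        pull := lastPull.Sub(firstPull)  // ~24.978s spent pulling the image
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull ~= 2.7114s, the logged SLO duration
    }
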
Apr 30 00:22:31.551363 systemd-logind[1472]: Removed session 13. Apr 30 00:22:36.555497 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:37480.service - OpenSSH per-connection server daemon (10.0.0.1:37480). Apr 30 00:22:36.634403 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 37480 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:36.636137 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:36.640457 systemd-logind[1472]: New session 14 of user core. Apr 30 00:22:36.648597 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:22:36.848280 sshd[4081]: Connection closed by 10.0.0.1 port 37480 Apr 30 00:22:36.848585 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:36.852260 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:37480.service: Deactivated successfully. Apr 30 00:22:36.854173 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:22:36.854793 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:22:36.855844 systemd-logind[1472]: Removed session 14. Apr 30 00:22:41.861075 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:53182.service - OpenSSH per-connection server daemon (10.0.0.1:53182). Apr 30 00:22:41.900494 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 53182 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:41.901970 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:41.905914 systemd-logind[1472]: New session 15 of user core. Apr 30 00:22:41.914590 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:22:42.039805 sshd[4096]: Connection closed by 10.0.0.1 port 53182 Apr 30 00:22:42.040337 sshd-session[4094]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:42.056110 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:53182.service: Deactivated successfully. Apr 30 00:22:42.058586 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:22:42.060615 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:22:42.073881 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:53190.service - OpenSSH per-connection server daemon (10.0.0.1:53190). Apr 30 00:22:42.075208 systemd-logind[1472]: Removed session 15. Apr 30 00:22:42.106578 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 53190 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:42.108224 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:42.112736 systemd-logind[1472]: New session 16 of user core. Apr 30 00:22:42.120607 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:22:42.617497 sshd[4111]: Connection closed by 10.0.0.1 port 53190 Apr 30 00:22:42.620646 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:42.629736 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:53190.service: Deactivated successfully. Apr 30 00:22:42.635622 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:22:42.639539 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:22:42.653914 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:53206.service - OpenSSH per-connection server daemon (10.0.0.1:53206). Apr 30 00:22:42.655430 systemd-logind[1472]: Removed session 16. 
Apr 30 00:22:42.688505 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 53206 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:42.690123 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:42.694279 systemd-logind[1472]: New session 17 of user core. Apr 30 00:22:42.704583 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:22:43.029994 sshd[4123]: Connection closed by 10.0.0.1 port 53206 Apr 30 00:22:43.030268 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:43.034945 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:53206.service: Deactivated successfully. Apr 30 00:22:43.037293 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:22:43.038086 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:22:43.039061 systemd-logind[1472]: Removed session 17. Apr 30 00:22:46.428127 kubelet[2620]: E0430 00:22:46.428047 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:48.044539 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:33158.service - OpenSSH per-connection server daemon (10.0.0.1:33158). Apr 30 00:22:48.080399 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 33158 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:48.081788 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:48.085771 systemd-logind[1472]: New session 18 of user core. Apr 30 00:22:48.097649 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:22:48.233544 sshd[4141]: Connection closed by 10.0.0.1 port 33158 Apr 30 00:22:48.233923 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:48.238495 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:33158.service: Deactivated successfully. Apr 30 00:22:48.240781 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:22:48.241427 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:22:48.242289 systemd-logind[1472]: Removed session 18. Apr 30 00:22:53.246486 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:33172.service - OpenSSH per-connection server daemon (10.0.0.1:33172). Apr 30 00:22:53.284270 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 33172 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:53.285849 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:53.290065 systemd-logind[1472]: New session 19 of user core. Apr 30 00:22:53.302605 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:22:53.451594 sshd[4156]: Connection closed by 10.0.0.1 port 33172 Apr 30 00:22:53.451995 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:53.455657 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:33172.service: Deactivated successfully. Apr 30 00:22:53.457555 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:22:53.458262 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:22:53.459207 systemd-logind[1472]: Removed session 19. 
Apr 30 00:22:56.427783 kubelet[2620]: E0430 00:22:56.427715 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:22:58.465483 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:40102.service - OpenSSH per-connection server daemon (10.0.0.1:40102). Apr 30 00:22:58.508591 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 40102 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:22:58.510269 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:22:58.515395 systemd-logind[1472]: New session 20 of user core. Apr 30 00:22:58.521608 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 00:22:58.631486 sshd[4170]: Connection closed by 10.0.0.1 port 40102 Apr 30 00:22:58.632679 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Apr 30 00:22:58.637040 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:40102.service: Deactivated successfully. Apr 30 00:22:58.639611 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:22:58.640351 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:22:58.641333 systemd-logind[1472]: Removed session 20. Apr 30 00:23:02.428073 kubelet[2620]: E0430 00:23:02.428028 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:03.643837 systemd[1]: Started sshd@20-10.0.0.119:22-10.0.0.1:40106.service - OpenSSH per-connection server daemon (10.0.0.1:40106). Apr 30 00:23:03.681615 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 40106 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:03.683194 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:03.686959 systemd-logind[1472]: New session 21 of user core. Apr 30 00:23:03.696570 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 00:23:03.851553 sshd[4185]: Connection closed by 10.0.0.1 port 40106 Apr 30 00:23:03.851999 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:03.863565 systemd[1]: sshd@20-10.0.0.119:22-10.0.0.1:40106.service: Deactivated successfully. Apr 30 00:23:03.865662 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 00:23:03.867582 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit. Apr 30 00:23:03.874707 systemd[1]: Started sshd@21-10.0.0.119:22-10.0.0.1:40118.service - OpenSSH per-connection server daemon (10.0.0.1:40118). Apr 30 00:23:03.875603 systemd-logind[1472]: Removed session 21. Apr 30 00:23:03.907061 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 40118 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:03.908507 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:03.912508 systemd-logind[1472]: New session 22 of user core. Apr 30 00:23:03.924586 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 00:23:04.604234 sshd[4199]: Connection closed by 10.0.0.1 port 40118 Apr 30 00:23:04.604899 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:04.616582 systemd[1]: sshd@21-10.0.0.119:22-10.0.0.1:40118.service: Deactivated successfully. 
Apr 30 00:23:04.618595 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 00:23:04.620305 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit. Apr 30 00:23:04.631740 systemd[1]: Started sshd@22-10.0.0.119:22-10.0.0.1:40126.service - OpenSSH per-connection server daemon (10.0.0.1:40126). Apr 30 00:23:04.632949 systemd-logind[1472]: Removed session 22. Apr 30 00:23:04.670089 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 40126 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:04.672600 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:04.677459 systemd-logind[1472]: New session 23 of user core. Apr 30 00:23:04.689716 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 00:23:07.181753 sshd[4212]: Connection closed by 10.0.0.1 port 40126 Apr 30 00:23:07.182321 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:07.190805 systemd[1]: sshd@22-10.0.0.119:22-10.0.0.1:40126.service: Deactivated successfully. Apr 30 00:23:07.192958 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 00:23:07.194882 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit. Apr 30 00:23:07.202787 systemd[1]: Started sshd@23-10.0.0.119:22-10.0.0.1:48766.service - OpenSSH per-connection server daemon (10.0.0.1:48766). Apr 30 00:23:07.204503 systemd-logind[1472]: Removed session 23. Apr 30 00:23:07.236279 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 48766 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:07.238038 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:07.242231 systemd-logind[1472]: New session 24 of user core. Apr 30 00:23:07.253750 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 00:23:08.027455 sshd[4231]: Connection closed by 10.0.0.1 port 48766 Apr 30 00:23:08.027955 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:08.040777 systemd[1]: sshd@23-10.0.0.119:22-10.0.0.1:48766.service: Deactivated successfully. Apr 30 00:23:08.042762 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 00:23:08.044145 systemd-logind[1472]: Session 24 logged out. Waiting for processes to exit. Apr 30 00:23:08.052770 systemd[1]: Started sshd@24-10.0.0.119:22-10.0.0.1:48770.service - OpenSSH per-connection server daemon (10.0.0.1:48770). Apr 30 00:23:08.053839 systemd-logind[1472]: Removed session 24. Apr 30 00:23:08.087843 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 48770 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:08.089478 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:08.093920 systemd-logind[1472]: New session 25 of user core. Apr 30 00:23:08.100588 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 00:23:08.244684 sshd[4243]: Connection closed by 10.0.0.1 port 48770 Apr 30 00:23:08.245078 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:08.249433 systemd[1]: sshd@24-10.0.0.119:22-10.0.0.1:48770.service: Deactivated successfully. Apr 30 00:23:08.251550 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 00:23:08.252137 systemd-logind[1472]: Session 25 logged out. Waiting for processes to exit. Apr 30 00:23:08.253099 systemd-logind[1472]: Removed session 25. 
Apr 30 00:23:10.427998 kubelet[2620]: E0430 00:23:10.427939 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:13.261705 systemd[1]: Started sshd@25-10.0.0.119:22-10.0.0.1:48774.service - OpenSSH per-connection server daemon (10.0.0.1:48774). Apr 30 00:23:13.298606 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 48774 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:13.299971 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:13.303671 systemd-logind[1472]: New session 26 of user core. Apr 30 00:23:13.310580 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 00:23:13.420253 sshd[4259]: Connection closed by 10.0.0.1 port 48774 Apr 30 00:23:13.420680 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:13.425002 systemd[1]: sshd@25-10.0.0.119:22-10.0.0.1:48774.service: Deactivated successfully. Apr 30 00:23:13.427539 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 00:23:13.428135 systemd-logind[1472]: Session 26 logged out. Waiting for processes to exit. Apr 30 00:23:13.428980 systemd-logind[1472]: Removed session 26. Apr 30 00:23:18.436947 systemd[1]: Started sshd@26-10.0.0.119:22-10.0.0.1:45858.service - OpenSSH per-connection server daemon (10.0.0.1:45858). Apr 30 00:23:18.477539 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 45858 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:18.479081 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:18.484141 systemd-logind[1472]: New session 27 of user core. Apr 30 00:23:18.493656 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 00:23:18.602064 sshd[4274]: Connection closed by 10.0.0.1 port 45858 Apr 30 00:23:18.602429 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:18.606812 systemd[1]: sshd@26-10.0.0.119:22-10.0.0.1:45858.service: Deactivated successfully. Apr 30 00:23:18.609714 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 00:23:18.610550 systemd-logind[1472]: Session 27 logged out. Waiting for processes to exit. Apr 30 00:23:18.611653 systemd-logind[1472]: Removed session 27. Apr 30 00:23:23.434675 kubelet[2620]: E0430 00:23:23.431986 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:23.628888 systemd[1]: Started sshd@27-10.0.0.119:22-10.0.0.1:45866.service - OpenSSH per-connection server daemon (10.0.0.1:45866). Apr 30 00:23:23.715680 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 45866 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:23.719073 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:23.736194 systemd-logind[1472]: New session 28 of user core. Apr 30 00:23:23.743736 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 00:23:23.863110 sshd[4292]: Connection closed by 10.0.0.1 port 45866 Apr 30 00:23:23.863546 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:23.868167 systemd[1]: sshd@27-10.0.0.119:22-10.0.0.1:45866.service: Deactivated successfully. 
Apr 30 00:23:23.871250 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 00:23:23.872379 systemd-logind[1472]: Session 28 logged out. Waiting for processes to exit. Apr 30 00:23:23.873369 systemd-logind[1472]: Removed session 28. Apr 30 00:23:26.427755 kubelet[2620]: E0430 00:23:26.427679 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:28.874841 systemd[1]: Started sshd@28-10.0.0.119:22-10.0.0.1:44242.service - OpenSSH per-connection server daemon (10.0.0.1:44242). Apr 30 00:23:28.911995 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 44242 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:28.913890 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:28.918152 systemd-logind[1472]: New session 29 of user core. Apr 30 00:23:28.926583 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 30 00:23:29.032571 sshd[4309]: Connection closed by 10.0.0.1 port 44242 Apr 30 00:23:29.032952 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:29.037280 systemd[1]: sshd@28-10.0.0.119:22-10.0.0.1:44242.service: Deactivated successfully. Apr 30 00:23:29.039425 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 00:23:29.040182 systemd-logind[1472]: Session 29 logged out. Waiting for processes to exit. Apr 30 00:23:29.041107 systemd-logind[1472]: Removed session 29. Apr 30 00:23:34.045145 systemd[1]: Started sshd@29-10.0.0.119:22-10.0.0.1:44248.service - OpenSSH per-connection server daemon (10.0.0.1:44248). Apr 30 00:23:34.084888 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 44248 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:34.086910 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:34.091420 systemd-logind[1472]: New session 30 of user core. Apr 30 00:23:34.100615 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 30 00:23:34.204551 sshd[4323]: Connection closed by 10.0.0.1 port 44248 Apr 30 00:23:34.204933 sshd-session[4321]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:34.218952 systemd[1]: sshd@29-10.0.0.119:22-10.0.0.1:44248.service: Deactivated successfully. Apr 30 00:23:34.221487 systemd[1]: session-30.scope: Deactivated successfully. Apr 30 00:23:34.223420 systemd-logind[1472]: Session 30 logged out. Waiting for processes to exit. Apr 30 00:23:34.232833 systemd[1]: Started sshd@30-10.0.0.119:22-10.0.0.1:44264.service - OpenSSH per-connection server daemon (10.0.0.1:44264). Apr 30 00:23:34.234036 systemd-logind[1472]: Removed session 30. Apr 30 00:23:34.265727 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 44264 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:34.267946 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:34.272566 systemd-logind[1472]: New session 31 of user core. Apr 30 00:23:34.282691 systemd[1]: Started session-31.scope - Session 31 of User core. 
Apr 30 00:23:35.643472 containerd[1488]: time="2025-04-30T00:23:35.641125438Z" level=info msg="StopContainer for \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\" with timeout 30 (s)" Apr 30 00:23:35.643472 containerd[1488]: time="2025-04-30T00:23:35.642886219Z" level=info msg="Stop container \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\" with signal terminated" Apr 30 00:23:35.662172 systemd[1]: cri-containerd-080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529.scope: Deactivated successfully. Apr 30 00:23:35.684494 containerd[1488]: time="2025-04-30T00:23:35.684429892Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:23:35.686936 containerd[1488]: time="2025-04-30T00:23:35.686833376Z" level=info msg="StopContainer for \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\" with timeout 2 (s)" Apr 30 00:23:35.687078 containerd[1488]: time="2025-04-30T00:23:35.687062718Z" level=info msg="Stop container \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\" with signal terminated" Apr 30 00:23:35.690399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529-rootfs.mount: Deactivated successfully. Apr 30 00:23:35.696520 systemd-networkd[1421]: lxc_health: Link DOWN Apr 30 00:23:35.696529 systemd-networkd[1421]: lxc_health: Lost carrier Apr 30 00:23:35.699281 containerd[1488]: time="2025-04-30T00:23:35.699224448Z" level=info msg="shim disconnected" id=080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529 namespace=k8s.io Apr 30 00:23:35.699281 containerd[1488]: time="2025-04-30T00:23:35.699278721Z" level=warning msg="cleaning up after shim disconnected" id=080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529 namespace=k8s.io Apr 30 00:23:35.699403 containerd[1488]: time="2025-04-30T00:23:35.699286856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:23:35.717945 containerd[1488]: time="2025-04-30T00:23:35.717796841Z" level=info msg="StopContainer for \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\" returns successfully" Apr 30 00:23:35.720620 systemd[1]: cri-containerd-30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d.scope: Deactivated successfully. Apr 30 00:23:35.720961 systemd[1]: cri-containerd-30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d.scope: Consumed 7.352s CPU time. Apr 30 00:23:35.722590 containerd[1488]: time="2025-04-30T00:23:35.722556810Z" level=info msg="StopPodSandbox for \"546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4\"" Apr 30 00:23:35.735868 containerd[1488]: time="2025-04-30T00:23:35.722604069Z" level=info msg="Container to stop \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:23:35.738128 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4-shm.mount: Deactivated successfully. Apr 30 00:23:35.743568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d-rootfs.mount: Deactivated successfully. 
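
The StopContainer entries above are the first half of pod teardown: kubelet asks the runtime to stop each container with a grace period (30s for the operator container, 2s for the cilium agent), the shim delivers SIGTERM, and the cri-containerd systemd scope is deactivated once the task exits. The corresponding CRI call carries the grace period explicitly; a sketch, with the container ID copied from the log:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Timeout is the grace period in seconds: SIGTERM first, then
        // SIGKILL if the container is still running when it expires.
        _, err = runtimeapi.NewRuntimeServiceClient(conn).StopContainer(
            context.Background(), &runtimeapi.StopContainerRequest{
                ContainerId: "080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529",
                Timeout:     30,
            })
        if err != nil {
            panic(err)
        }
    }
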
Apr 30 00:23:35.744399 systemd[1]: cri-containerd-546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4.scope: Deactivated successfully. Apr 30 00:23:35.758175 containerd[1488]: time="2025-04-30T00:23:35.757951062Z" level=info msg="shim disconnected" id=30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d namespace=k8s.io Apr 30 00:23:35.758175 containerd[1488]: time="2025-04-30T00:23:35.758009743Z" level=warning msg="cleaning up after shim disconnected" id=30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d namespace=k8s.io Apr 30 00:23:35.758175 containerd[1488]: time="2025-04-30T00:23:35.758020363Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:23:35.765173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4-rootfs.mount: Deactivated successfully. Apr 30 00:23:35.767774 containerd[1488]: time="2025-04-30T00:23:35.766412263Z" level=info msg="shim disconnected" id=546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4 namespace=k8s.io Apr 30 00:23:35.767774 containerd[1488]: time="2025-04-30T00:23:35.766522511Z" level=warning msg="cleaning up after shim disconnected" id=546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4 namespace=k8s.io Apr 30 00:23:35.767774 containerd[1488]: time="2025-04-30T00:23:35.766533842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:23:35.778803 containerd[1488]: time="2025-04-30T00:23:35.778661308Z" level=info msg="StopContainer for \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\" returns successfully" Apr 30 00:23:35.779253 containerd[1488]: time="2025-04-30T00:23:35.779229379Z" level=info msg="StopPodSandbox for \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\"" Apr 30 00:23:35.779369 containerd[1488]: time="2025-04-30T00:23:35.779292519Z" level=info msg="Container to stop \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:23:35.779407 containerd[1488]: time="2025-04-30T00:23:35.779366779Z" level=info msg="Container to stop \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:23:35.779407 containerd[1488]: time="2025-04-30T00:23:35.779383109Z" level=info msg="Container to stop \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:23:35.779407 containerd[1488]: time="2025-04-30T00:23:35.779394541Z" level=info msg="Container to stop \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:23:35.779493 containerd[1488]: time="2025-04-30T00:23:35.779405942Z" level=info msg="Container to stop \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:23:35.783289 containerd[1488]: time="2025-04-30T00:23:35.783244854Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:23:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:23:35.784717 containerd[1488]: time="2025-04-30T00:23:35.784681173Z" level=info msg="TearDown network for sandbox 
\"546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4\" successfully" Apr 30 00:23:35.784717 containerd[1488]: time="2025-04-30T00:23:35.784704146Z" level=info msg="StopPodSandbox for \"546f9d701111690bacd4d5fd94679dddc9f602ab6dcfc907b5409a865bc4d4f4\" returns successfully" Apr 30 00:23:35.785786 systemd[1]: cri-containerd-bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c.scope: Deactivated successfully. Apr 30 00:23:35.812204 containerd[1488]: time="2025-04-30T00:23:35.812078371Z" level=info msg="shim disconnected" id=bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c namespace=k8s.io Apr 30 00:23:35.812204 containerd[1488]: time="2025-04-30T00:23:35.812143013Z" level=warning msg="cleaning up after shim disconnected" id=bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c namespace=k8s.io Apr 30 00:23:35.812204 containerd[1488]: time="2025-04-30T00:23:35.812151189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:23:35.827508 containerd[1488]: time="2025-04-30T00:23:35.827453883Z" level=info msg="TearDown network for sandbox \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" successfully" Apr 30 00:23:35.827508 containerd[1488]: time="2025-04-30T00:23:35.827490523Z" level=info msg="StopPodSandbox for \"bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c\" returns successfully" Apr 30 00:23:35.851792 kubelet[2620]: I0430 00:23:35.851739 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04ec286c-b961-4235-8aa7-fa7326f986a0-cilium-config-path\") pod \"04ec286c-b961-4235-8aa7-fa7326f986a0\" (UID: \"04ec286c-b961-4235-8aa7-fa7326f986a0\") " Apr 30 00:23:35.851792 kubelet[2620]: I0430 00:23:35.851781 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-host-proc-sys-kernel\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.851792 kubelet[2620]: I0430 00:23:35.851798 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kslq\" (UniqueName: \"kubernetes.io/projected/04ec286c-b961-4235-8aa7-fa7326f986a0-kube-api-access-5kslq\") pod \"04ec286c-b961-4235-8aa7-fa7326f986a0\" (UID: \"04ec286c-b961-4235-8aa7-fa7326f986a0\") " Apr 30 00:23:35.854741 kubelet[2620]: I0430 00:23:35.851812 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-run\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854741 kubelet[2620]: I0430 00:23:35.851825 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-hostproc\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854741 kubelet[2620]: I0430 00:23:35.851837 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-etc-cni-netd\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854741 
kubelet[2620]: I0430 00:23:35.851849 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-xtables-lock\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854741 kubelet[2620]: I0430 00:23:35.851863 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-hubble-tls\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854741 kubelet[2620]: I0430 00:23:35.851874 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-bpf-maps\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854975 kubelet[2620]: I0430 00:23:35.851889 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4x2h\" (UniqueName: \"kubernetes.io/projected/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-kube-api-access-h4x2h\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854975 kubelet[2620]: I0430 00:23:35.851903 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-lib-modules\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854975 kubelet[2620]: I0430 00:23:35.851919 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-config-path\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854975 kubelet[2620]: I0430 00:23:35.851932 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cni-path\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854975 kubelet[2620]: I0430 00:23:35.851945 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-cgroup\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.854975 kubelet[2620]: I0430 00:23:35.851960 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-clustermesh-secrets\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.855202 kubelet[2620]: I0430 00:23:35.851974 2620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-host-proc-sys-net\") pod \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\" (UID: \"8f539ef2-2fc8-413c-b20a-6d4c2cd67360\") " Apr 30 00:23:35.855202 kubelet[2620]: I0430 00:23:35.852018 2620 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855202 kubelet[2620]: I0430 00:23:35.852048 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855202 kubelet[2620]: I0430 00:23:35.852063 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-hostproc" (OuterVolumeSpecName: "hostproc") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855202 kubelet[2620]: I0430 00:23:35.852077 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855418 kubelet[2620]: I0430 00:23:35.852090 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855418 kubelet[2620]: I0430 00:23:35.852181 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855418 kubelet[2620]: I0430 00:23:35.855166 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cni-path" (OuterVolumeSpecName: "cni-path") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855418 kubelet[2620]: I0430 00:23:35.855203 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855418 kubelet[2620]: I0430 00:23:35.855294 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ec286c-b961-4235-8aa7-fa7326f986a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04ec286c-b961-4235-8aa7-fa7326f986a0" (UID: "04ec286c-b961-4235-8aa7-fa7326f986a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:23:35.855601 kubelet[2620]: I0430 00:23:35.855540 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.855862 kubelet[2620]: I0430 00:23:35.855834 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ec286c-b961-4235-8aa7-fa7326f986a0-kube-api-access-5kslq" (OuterVolumeSpecName: "kube-api-access-5kslq") pod "04ec286c-b961-4235-8aa7-fa7326f986a0" (UID: "04ec286c-b961-4235-8aa7-fa7326f986a0"). InnerVolumeSpecName "kube-api-access-5kslq". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:23:35.855925 kubelet[2620]: I0430 00:23:35.855870 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:23:35.857766 kubelet[2620]: I0430 00:23:35.857730 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:23:35.857964 kubelet[2620]: I0430 00:23:35.857936 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:23:35.858275 kubelet[2620]: I0430 00:23:35.858254 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:23:35.858658 kubelet[2620]: I0430 00:23:35.858632 2620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-kube-api-access-h4x2h" (OuterVolumeSpecName: "kube-api-access-h4x2h") pod "8f539ef2-2fc8-413c-b20a-6d4c2cd67360" (UID: "8f539ef2-2fc8-413c-b20a-6d4c2cd67360"). InnerVolumeSpecName "kube-api-access-h4x2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:23:35.952412 kubelet[2620]: I0430 00:23:35.952309 2620 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952412 kubelet[2620]: I0430 00:23:35.952332 2620 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5kslq\" (UniqueName: \"kubernetes.io/projected/04ec286c-b961-4235-8aa7-fa7326f986a0-kube-api-access-5kslq\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952412 kubelet[2620]: I0430 00:23:35.952343 2620 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952412 kubelet[2620]: I0430 00:23:35.952351 2620 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952412 kubelet[2620]: I0430 00:23:35.952360 2620 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952412 kubelet[2620]: I0430 00:23:35.952368 2620 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952412 kubelet[2620]: I0430 00:23:35.952376 2620 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952412 kubelet[2620]: I0430 00:23:35.952383 2620 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952754 kubelet[2620]: I0430 00:23:35.952392 2620 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h4x2h\" (UniqueName: \"kubernetes.io/projected/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-kube-api-access-h4x2h\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952754 kubelet[2620]: I0430 00:23:35.952401 2620 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952754 kubelet[2620]: I0430 00:23:35.952408 2620 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952754 kubelet[2620]: I0430 00:23:35.952416 2620 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952754 kubelet[2620]: I0430 00:23:35.952424 2620 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-cilium-cgroup\") on node \"localhost\" DevicePath 
\"\"" Apr 30 00:23:35.952754 kubelet[2620]: I0430 00:23:35.952431 2620 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952754 kubelet[2620]: I0430 00:23:35.952439 2620 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f539ef2-2fc8-413c-b20a-6d4c2cd67360-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:35.952754 kubelet[2620]: I0430 00:23:35.952462 2620 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04ec286c-b961-4235-8aa7-fa7326f986a0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:23:36.435526 systemd[1]: Removed slice kubepods-besteffort-pod04ec286c_b961_4235_8aa7_fa7326f986a0.slice - libcontainer container kubepods-besteffort-pod04ec286c_b961_4235_8aa7_fa7326f986a0.slice. Apr 30 00:23:36.436711 systemd[1]: Removed slice kubepods-burstable-pod8f539ef2_2fc8_413c_b20a_6d4c2cd67360.slice - libcontainer container kubepods-burstable-pod8f539ef2_2fc8_413c_b20a_6d4c2cd67360.slice. Apr 30 00:23:36.436820 systemd[1]: kubepods-burstable-pod8f539ef2_2fc8_413c_b20a_6d4c2cd67360.slice: Consumed 7.487s CPU time. Apr 30 00:23:36.655669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c-rootfs.mount: Deactivated successfully. Apr 30 00:23:36.655810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd9934ab56694892871b9962eb511b76ea58daeac830df77996a8bbf643cc01c-shm.mount: Deactivated successfully. Apr 30 00:23:36.655920 systemd[1]: var-lib-kubelet-pods-04ec286c\x2db961\x2d4235\x2d8aa7\x2dfa7326f986a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5kslq.mount: Deactivated successfully. Apr 30 00:23:36.656025 systemd[1]: var-lib-kubelet-pods-8f539ef2\x2d2fc8\x2d413c\x2db20a\x2d6d4c2cd67360-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh4x2h.mount: Deactivated successfully. Apr 30 00:23:36.656142 systemd[1]: var-lib-kubelet-pods-8f539ef2\x2d2fc8\x2d413c\x2db20a\x2d6d4c2cd67360-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:23:36.656267 systemd[1]: var-lib-kubelet-pods-8f539ef2\x2d2fc8\x2d413c\x2db20a\x2d6d4c2cd67360-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
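
The mount unit names in the cleanup above are systemd path escaping at work: '/' separators become '-', and bytes outside the safe set (including '-' itself and '~') are escaped as \xNN, which is why kube-api-access-h4x2h appears as kube\x2dapi\x2daccess\x2dh4x2h and kubernetes.io~secret as kubernetes.io\x7esecret. A rough sketch of the rule (simplified; systemd-escape handles more edge cases such as leading dots):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates `systemd-escape --path --suffix=mount`:
    // trim the surrounding '/', escape each byte that is not
    // [A-Za-z0-9_.] as \xNN, and join path components with '-'.
    func escapePath(p string) string {
        var parts []string
        for _, comp := range strings.Split(strings.Trim(p, "/"), "/") {
            var b strings.Builder
            for i := 0; i < len(comp); i++ {
                c := comp[i]
                switch {
                case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                    c >= '0' && c <= '9', c == '_', c == '.':
                    b.WriteByte(c)
                default:
                    fmt.Fprintf(&b, `\x%02x`, c)
                }
            }
            parts = append(parts, b.String())
        }
        return strings.Join(parts, "-") + ".mount"
    }

    func main() {
        fmt.Println(escapePath(
            "/var/lib/kubelet/pods/8f539ef2-2fc8-413c-b20a-6d4c2cd67360" +
                "/volumes/kubernetes.io~secret/clustermesh-secrets"))
        // Prints the unit name seen in the log:
        // var-lib-kubelet-pods-8f539ef2\x2d2fc8\x2d413c\x2db20a\x2d6d4c2cd67360-
        // volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount
    }
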
Apr 30 00:23:36.726474 kubelet[2620]: I0430 00:23:36.726312 2620 scope.go:117] "RemoveContainer" containerID="080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529" Apr 30 00:23:36.734575 containerd[1488]: time="2025-04-30T00:23:36.734529229Z" level=info msg="RemoveContainer for \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\"" Apr 30 00:23:36.739293 containerd[1488]: time="2025-04-30T00:23:36.739234865Z" level=info msg="RemoveContainer for \"080dda6afcc638b428269ebbf9b6ce7ac5ce4aba0d3b681510af8517fbfd8529\" returns successfully" Apr 30 00:23:36.739603 kubelet[2620]: I0430 00:23:36.739572 2620 scope.go:117] "RemoveContainer" containerID="30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d" Apr 30 00:23:36.741229 containerd[1488]: time="2025-04-30T00:23:36.741190984Z" level=info msg="RemoveContainer for \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\"" Apr 30 00:23:36.745228 containerd[1488]: time="2025-04-30T00:23:36.745182663Z" level=info msg="RemoveContainer for \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\" returns successfully" Apr 30 00:23:36.745436 kubelet[2620]: I0430 00:23:36.745394 2620 scope.go:117] "RemoveContainer" containerID="bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571" Apr 30 00:23:36.746418 containerd[1488]: time="2025-04-30T00:23:36.746383558Z" level=info msg="RemoveContainer for \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\"" Apr 30 00:23:36.750152 containerd[1488]: time="2025-04-30T00:23:36.750087474Z" level=info msg="RemoveContainer for \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\" returns successfully" Apr 30 00:23:36.750378 kubelet[2620]: I0430 00:23:36.750344 2620 scope.go:117] "RemoveContainer" containerID="de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5" Apr 30 00:23:36.752026 containerd[1488]: time="2025-04-30T00:23:36.751993249Z" level=info msg="RemoveContainer for \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\"" Apr 30 00:23:36.760078 containerd[1488]: time="2025-04-30T00:23:36.760031090Z" level=info msg="RemoveContainer for \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\" returns successfully" Apr 30 00:23:36.760381 kubelet[2620]: I0430 00:23:36.760291 2620 scope.go:117] "RemoveContainer" containerID="352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272" Apr 30 00:23:36.761401 containerd[1488]: time="2025-04-30T00:23:36.761334328Z" level=info msg="RemoveContainer for \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\"" Apr 30 00:23:36.764583 containerd[1488]: time="2025-04-30T00:23:36.764545634Z" level=info msg="RemoveContainer for \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\" returns successfully" Apr 30 00:23:36.764764 kubelet[2620]: I0430 00:23:36.764732 2620 scope.go:117] "RemoveContainer" containerID="ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57" Apr 30 00:23:36.765913 containerd[1488]: time="2025-04-30T00:23:36.765868359Z" level=info msg="RemoveContainer for \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\"" Apr 30 00:23:36.769376 containerd[1488]: time="2025-04-30T00:23:36.769343493Z" level=info msg="RemoveContainer for \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\" returns successfully" Apr 30 00:23:36.769637 kubelet[2620]: I0430 00:23:36.769590 2620 scope.go:117] "RemoveContainer" 
containerID="30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d" Apr 30 00:23:36.769865 containerd[1488]: time="2025-04-30T00:23:36.769805675Z" level=error msg="ContainerStatus for \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\": not found" Apr 30 00:23:36.770057 kubelet[2620]: E0430 00:23:36.770017 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\": not found" containerID="30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d" Apr 30 00:23:36.770202 kubelet[2620]: I0430 00:23:36.770073 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d"} err="failed to get container status \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\": rpc error: code = NotFound desc = an error occurred when try to find container \"30c1d4af5d8a1a557f714e17245362fe1a2815e6af33c1188f8cf83b8672d60d\": not found" Apr 30 00:23:36.770240 kubelet[2620]: I0430 00:23:36.770205 2620 scope.go:117] "RemoveContainer" containerID="bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571" Apr 30 00:23:36.770494 containerd[1488]: time="2025-04-30T00:23:36.770438719Z" level=error msg="ContainerStatus for \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\": not found" Apr 30 00:23:36.770596 kubelet[2620]: E0430 00:23:36.770578 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\": not found" containerID="bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571" Apr 30 00:23:36.770651 kubelet[2620]: I0430 00:23:36.770601 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571"} err="failed to get container status \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\": rpc error: code = NotFound desc = an error occurred when try to find container \"bad50a2c10d6a3b3d1546b9f6b0500c4105cc7c2ac9af940eaf1b937b0850571\": not found" Apr 30 00:23:36.770651 kubelet[2620]: I0430 00:23:36.770621 2620 scope.go:117] "RemoveContainer" containerID="de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5" Apr 30 00:23:36.770823 containerd[1488]: time="2025-04-30T00:23:36.770790032Z" level=error msg="ContainerStatus for \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\": not found" Apr 30 00:23:36.770927 kubelet[2620]: E0430 00:23:36.770902 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\": not found" 
containerID="de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5" Apr 30 00:23:36.770972 kubelet[2620]: I0430 00:23:36.770930 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5"} err="failed to get container status \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"de44c4428149a533ca15c6d66c3ea24e5babf7e520923c4f33e35162143cf3c5\": not found" Apr 30 00:23:36.770972 kubelet[2620]: I0430 00:23:36.770951 2620 scope.go:117] "RemoveContainer" containerID="352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272" Apr 30 00:23:36.771214 containerd[1488]: time="2025-04-30T00:23:36.771157906Z" level=error msg="ContainerStatus for \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\": not found" Apr 30 00:23:36.771385 kubelet[2620]: E0430 00:23:36.771339 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\": not found" containerID="352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272" Apr 30 00:23:36.771429 kubelet[2620]: I0430 00:23:36.771378 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272"} err="failed to get container status \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\": rpc error: code = NotFound desc = an error occurred when try to find container \"352922f7af7af145b7d5b877d214e5588a33e0736378dcce337b0bdae6727272\": not found" Apr 30 00:23:36.771429 kubelet[2620]: I0430 00:23:36.771402 2620 scope.go:117] "RemoveContainer" containerID="ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57" Apr 30 00:23:36.771693 containerd[1488]: time="2025-04-30T00:23:36.771612273Z" level=error msg="ContainerStatus for \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\": not found" Apr 30 00:23:36.771782 kubelet[2620]: E0430 00:23:36.771755 2620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\": not found" containerID="ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57" Apr 30 00:23:36.771834 kubelet[2620]: I0430 00:23:36.771781 2620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57"} err="failed to get container status \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddc432377fad2dfb534c159234ee40bca85f2e4e44f2be58fa55e599fa8c1a57\": not found" Apr 30 00:23:37.599808 sshd[4337]: Connection closed by 10.0.0.1 port 44264 Apr 30 00:23:37.600229 sshd-session[4335]: pam_unix(sshd:session): session closed for user core 
Apr 30 00:23:37.612639 systemd[1]: sshd@30-10.0.0.119:22-10.0.0.1:44264.service: Deactivated successfully. Apr 30 00:23:37.614740 systemd[1]: session-31.scope: Deactivated successfully. Apr 30 00:23:37.616429 systemd-logind[1472]: Session 31 logged out. Waiting for processes to exit. Apr 30 00:23:37.618080 systemd[1]: Started sshd@31-10.0.0.119:22-10.0.0.1:36934.service - OpenSSH per-connection server daemon (10.0.0.1:36934). Apr 30 00:23:37.618945 systemd-logind[1472]: Removed session 31. Apr 30 00:23:37.660698 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 36934 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:37.662278 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:37.666376 systemd-logind[1472]: New session 32 of user core. Apr 30 00:23:37.673592 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 30 00:23:38.403955 sshd[4498]: Connection closed by 10.0.0.1 port 36934 Apr 30 00:23:38.404351 sshd-session[4496]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:38.414814 systemd[1]: sshd@31-10.0.0.119:22-10.0.0.1:36934.service: Deactivated successfully. Apr 30 00:23:38.418523 systemd[1]: session-32.scope: Deactivated successfully. Apr 30 00:23:38.421222 systemd-logind[1472]: Session 32 logged out. Waiting for processes to exit. Apr 30 00:23:38.429793 kubelet[2620]: I0430 00:23:38.429755 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ec286c-b961-4235-8aa7-fa7326f986a0" path="/var/lib/kubelet/pods/04ec286c-b961-4235-8aa7-fa7326f986a0/volumes" Apr 30 00:23:38.436757 kubelet[2620]: I0430 00:23:38.430366 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f539ef2-2fc8-413c-b20a-6d4c2cd67360" path="/var/lib/kubelet/pods/8f539ef2-2fc8-413c-b20a-6d4c2cd67360/volumes" Apr 30 00:23:38.436757 kubelet[2620]: E0430 00:23:38.431695 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f539ef2-2fc8-413c-b20a-6d4c2cd67360" containerName="mount-bpf-fs" Apr 30 00:23:38.436757 kubelet[2620]: E0430 00:23:38.431710 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="04ec286c-b961-4235-8aa7-fa7326f986a0" containerName="cilium-operator" Apr 30 00:23:38.436757 kubelet[2620]: E0430 00:23:38.431718 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f539ef2-2fc8-413c-b20a-6d4c2cd67360" containerName="mount-cgroup" Apr 30 00:23:38.436757 kubelet[2620]: E0430 00:23:38.431725 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f539ef2-2fc8-413c-b20a-6d4c2cd67360" containerName="apply-sysctl-overwrites" Apr 30 00:23:38.436757 kubelet[2620]: E0430 00:23:38.431731 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f539ef2-2fc8-413c-b20a-6d4c2cd67360" containerName="clean-cilium-state" Apr 30 00:23:38.436757 kubelet[2620]: E0430 00:23:38.431737 2620 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f539ef2-2fc8-413c-b20a-6d4c2cd67360" containerName="cilium-agent" Apr 30 00:23:38.436757 kubelet[2620]: I0430 00:23:38.431759 2620 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f539ef2-2fc8-413c-b20a-6d4c2cd67360" containerName="cilium-agent" Apr 30 00:23:38.436757 kubelet[2620]: I0430 00:23:38.431767 2620 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ec286c-b961-4235-8aa7-fa7326f986a0" containerName="cilium-operator" Apr 30 00:23:38.432960 systemd[1]: Started 
sshd@32-10.0.0.119:22-10.0.0.1:36940.service - OpenSSH per-connection server daemon (10.0.0.1:36940). Apr 30 00:23:38.436179 systemd-logind[1472]: Removed session 32. Apr 30 00:23:38.442093 systemd[1]: Created slice kubepods-burstable-pod92699844_5854_45c6_b966_fd9ab372c8fe.slice - libcontainer container kubepods-burstable-pod92699844_5854_45c6_b966_fd9ab372c8fe.slice. Apr 30 00:23:38.467469 kubelet[2620]: I0430 00:23:38.467084 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92699844-5854-45c6-b966-fd9ab372c8fe-cilium-ipsec-secrets\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467469 kubelet[2620]: I0430 00:23:38.467128 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92699844-5854-45c6-b966-fd9ab372c8fe-cilium-config-path\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467469 kubelet[2620]: I0430 00:23:38.467157 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-cilium-cgroup\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467469 kubelet[2620]: I0430 00:23:38.467174 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-cni-path\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467469 kubelet[2620]: I0430 00:23:38.467188 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-xtables-lock\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467469 kubelet[2620]: I0430 00:23:38.467202 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-host-proc-sys-net\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467747 kubelet[2620]: I0430 00:23:38.467215 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-bpf-maps\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467747 kubelet[2620]: I0430 00:23:38.467229 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-host-proc-sys-kernel\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467747 kubelet[2620]: I0430 00:23:38.467242 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/92699844-5854-45c6-b966-fd9ab372c8fe-hubble-tls\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467747 kubelet[2620]: I0430 00:23:38.467255 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjrs5\" (UniqueName: \"kubernetes.io/projected/92699844-5854-45c6-b966-fd9ab372c8fe-kube-api-access-wjrs5\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467747 kubelet[2620]: I0430 00:23:38.467269 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-etc-cni-netd\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467747 kubelet[2620]: I0430 00:23:38.467289 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92699844-5854-45c6-b966-fd9ab372c8fe-clustermesh-secrets\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467955 kubelet[2620]: I0430 00:23:38.467303 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-cilium-run\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467955 kubelet[2620]: I0430 00:23:38.467322 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-hostproc\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.467955 kubelet[2620]: I0430 00:23:38.467336 2620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92699844-5854-45c6-b966-fd9ab372c8fe-lib-modules\") pod \"cilium-tmwnx\" (UID: \"92699844-5854-45c6-b966-fd9ab372c8fe\") " pod="kube-system/cilium-tmwnx" Apr 30 00:23:38.475778 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 36940 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:38.477561 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:38.484386 systemd-logind[1472]: New session 33 of user core. Apr 30 00:23:38.493743 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 30 00:23:38.550542 sshd[4511]: Connection closed by 10.0.0.1 port 36940 Apr 30 00:23:38.551020 sshd-session[4509]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:38.571614 systemd[1]: sshd@32-10.0.0.119:22-10.0.0.1:36940.service: Deactivated successfully. Apr 30 00:23:38.589807 systemd[1]: session-33.scope: Deactivated successfully. Apr 30 00:23:38.591928 systemd-logind[1472]: Session 33 logged out. Waiting for processes to exit. Apr 30 00:23:38.598966 systemd[1]: Started sshd@33-10.0.0.119:22-10.0.0.1:36946.service - OpenSSH per-connection server daemon (10.0.0.1:36946). Apr 30 00:23:38.600151 systemd-logind[1472]: Removed session 33. 
Apr 30 00:23:38.631408 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 36946 ssh2: RSA SHA256:2r4iv0MjZWHjsrxa76ksqOJasVZsJgfddk3h7cCSwDE Apr 30 00:23:38.633321 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:23:38.638180 systemd-logind[1472]: New session 34 of user core. Apr 30 00:23:38.647656 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 30 00:23:38.745195 kubelet[2620]: E0430 00:23:38.745004 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:38.746018 containerd[1488]: time="2025-04-30T00:23:38.745906525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tmwnx,Uid:92699844-5854-45c6-b966-fd9ab372c8fe,Namespace:kube-system,Attempt:0,}" Apr 30 00:23:38.778769 containerd[1488]: time="2025-04-30T00:23:38.778493972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:23:38.778769 containerd[1488]: time="2025-04-30T00:23:38.778635951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:23:38.778769 containerd[1488]: time="2025-04-30T00:23:38.778652943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:23:38.779223 containerd[1488]: time="2025-04-30T00:23:38.779055021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:23:38.811852 systemd[1]: Started cri-containerd-9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7.scope - libcontainer container 9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7. Apr 30 00:23:38.847157 containerd[1488]: time="2025-04-30T00:23:38.847090261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tmwnx,Uid:92699844-5854-45c6-b966-fd9ab372c8fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\"" Apr 30 00:23:38.848154 kubelet[2620]: E0430 00:23:38.848110 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:38.851034 containerd[1488]: time="2025-04-30T00:23:38.850982240Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:23:38.913084 containerd[1488]: time="2025-04-30T00:23:38.913003199Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1ec13a5197a3f0c9e45ff0b39fc5dbafe713fe3e3b855fcd860df9a81e713dd9\"" Apr 30 00:23:38.914085 containerd[1488]: time="2025-04-30T00:23:38.914057898Z" level=info msg="StartContainer for \"1ec13a5197a3f0c9e45ff0b39fc5dbafe713fe3e3b855fcd860df9a81e713dd9\"" Apr 30 00:23:38.950835 systemd[1]: Started cri-containerd-1ec13a5197a3f0c9e45ff0b39fc5dbafe713fe3e3b855fcd860df9a81e713dd9.scope - libcontainer container 1ec13a5197a3f0c9e45ff0b39fc5dbafe713fe3e3b855fcd860df9a81e713dd9. 
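The "Nameserver limits exceeded" events that begin here and recur through the rest of the log come from the kubelet's resolv.conf handling: it forwards at most three nameservers to pods, and this node's resolv.conf evidently lists more, so the extras are dropped and the first three (1.1.1.1 1.0.0.1 8.8.8.8) become the applied line. The warning repeats because the check appears to run each time pod DNS configuration is assembled. A small reproduction of that clamp; the limit of three matches the message, and the parsing is a sketch:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxDNSNameservers = 3 // the kubelet's limit; extras trigger the warning above

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxDNSNameservers {
		// Mirrors the kubelet: warn, then keep only the first three.
		fmt.Printf("nameserver limits exceeded, applied nameserver line is: %s\n",
			strings.Join(servers[:maxDNSNameservers], " "))
	}
}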
Apr 30 00:23:39.016058 systemd[1]: cri-containerd-1ec13a5197a3f0c9e45ff0b39fc5dbafe713fe3e3b855fcd860df9a81e713dd9.scope: Deactivated successfully. Apr 30 00:23:39.047846 containerd[1488]: time="2025-04-30T00:23:39.047783942Z" level=info msg="StartContainer for \"1ec13a5197a3f0c9e45ff0b39fc5dbafe713fe3e3b855fcd860df9a81e713dd9\" returns successfully" Apr 30 00:23:39.198701 containerd[1488]: time="2025-04-30T00:23:39.198626796Z" level=info msg="shim disconnected" id=1ec13a5197a3f0c9e45ff0b39fc5dbafe713fe3e3b855fcd860df9a81e713dd9 namespace=k8s.io Apr 30 00:23:39.198701 containerd[1488]: time="2025-04-30T00:23:39.198691418Z" level=warning msg="cleaning up after shim disconnected" id=1ec13a5197a3f0c9e45ff0b39fc5dbafe713fe3e3b855fcd860df9a81e713dd9 namespace=k8s.io Apr 30 00:23:39.198701 containerd[1488]: time="2025-04-30T00:23:39.198706146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:23:39.483481 kubelet[2620]: E0430 00:23:39.483392 2620 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 00:23:39.740057 kubelet[2620]: E0430 00:23:39.739927 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:39.743991 containerd[1488]: time="2025-04-30T00:23:39.743903597Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:23:39.761261 containerd[1488]: time="2025-04-30T00:23:39.761192512Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413\"" Apr 30 00:23:39.761868 containerd[1488]: time="2025-04-30T00:23:39.761795198Z" level=info msg="StartContainer for \"21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413\"" Apr 30 00:23:39.807646 systemd[1]: Started cri-containerd-21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413.scope - libcontainer container 21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413. Apr 30 00:23:39.851045 systemd[1]: cri-containerd-21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413.scope: Deactivated successfully. Apr 30 00:23:39.914135 containerd[1488]: time="2025-04-30T00:23:39.914065985Z" level=info msg="StartContainer for \"21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413\" returns successfully" Apr 30 00:23:40.070523 containerd[1488]: time="2025-04-30T00:23:40.070421488Z" level=info msg="shim disconnected" id=21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413 namespace=k8s.io Apr 30 00:23:40.070523 containerd[1488]: time="2025-04-30T00:23:40.070520836Z" level=warning msg="cleaning up after shim disconnected" id=21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413 namespace=k8s.io Apr 30 00:23:40.070523 containerd[1488]: time="2025-04-30T00:23:40.070531626Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:23:40.578615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21f72447c63a94fac333f62d05ade574dc676c5ebec16f2dfbc03f9014516413-rootfs.mount: Deactivated successfully. 
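The apparent disorder above, with the container scope deactivating at 00:23:39.016 before StartContainer "returns successfully" at 00:23:39.047, is normal for Cilium's init containers: mount-cgroup and apply-sysctl-overwrites each run for a fraction of a second and exit, so systemd reaps the scope while containerd is still writing its response, and the "shim disconnected" lines are the runtime cleaning up the exited task. Each step is the same CRI pair the kubelet issued for the sandbox's first container, sketched below with the configuration trimmed to the call shape (real requests carry image, mounts, env, and command):

package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startInitContainer shows the CreateContainer/StartContainer pair behind
// the log lines above. Fields beyond the metadata are elided; this is the
// call shape, not Cilium's actual container configuration.
func startInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig, name string) (string, error) {

	resp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID, // "9a564fcdc4e0..." in this log
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: name}, // e.g. "mount-cgroup"
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	_, err = rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: resp.ContainerId})
	return resp.ContainerId, err
}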
Apr 30 00:23:40.794484 kubelet[2620]: E0430 00:23:40.794414 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:40.802291 containerd[1488]: time="2025-04-30T00:23:40.798011239Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:23:40.843971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867096328.mount: Deactivated successfully. Apr 30 00:23:40.871824 containerd[1488]: time="2025-04-30T00:23:40.871740291Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef\"" Apr 30 00:23:40.876471 containerd[1488]: time="2025-04-30T00:23:40.875704016Z" level=info msg="StartContainer for \"5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef\"" Apr 30 00:23:40.980597 systemd[1]: Started cri-containerd-5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef.scope - libcontainer container 5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef. Apr 30 00:23:41.072671 systemd[1]: cri-containerd-5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef.scope: Deactivated successfully. Apr 30 00:23:41.077526 containerd[1488]: time="2025-04-30T00:23:41.074371372Z" level=info msg="StartContainer for \"5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef\" returns successfully" Apr 30 00:23:41.200267 containerd[1488]: time="2025-04-30T00:23:41.197217475Z" level=info msg="shim disconnected" id=5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef namespace=k8s.io Apr 30 00:23:41.200267 containerd[1488]: time="2025-04-30T00:23:41.197291755Z" level=warning msg="cleaning up after shim disconnected" id=5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef namespace=k8s.io Apr 30 00:23:41.200267 containerd[1488]: time="2025-04-30T00:23:41.197302385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:23:41.578537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ed784be7f93620be25fafff238318ddd06766f44b7f35ce0273152d031ae3ef-rootfs.mount: Deactivated successfully. 
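The third init step, mount-bpf-fs, does what its name says: it ensures the BPF filesystem is mounted so the agent's maps (the bpf-maps hostPath volume above) survive agent restarts. The equivalent mount in Go, under the assumption that this is the standard bpffs mount at /sys/fs/bpf that Cilium performs:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of `mount -t bpf bpf /sys/fs/bpf`; assumes the target
	// directory exists and that we run with CAP_SYS_ADMIN, as the init
	// container does.
	if err := unix.Mount("bpf", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}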
Apr 30 00:23:41.797724 kubelet[2620]: E0430 00:23:41.797463 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:41.799394 containerd[1488]: time="2025-04-30T00:23:41.799348263Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:23:41.982188 containerd[1488]: time="2025-04-30T00:23:41.981979148Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1\"" Apr 30 00:23:41.983023 containerd[1488]: time="2025-04-30T00:23:41.982981438Z" level=info msg="StartContainer for \"98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1\"" Apr 30 00:23:42.019770 systemd[1]: Started cri-containerd-98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1.scope - libcontainer container 98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1. Apr 30 00:23:42.046684 systemd[1]: cri-containerd-98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1.scope: Deactivated successfully. Apr 30 00:23:42.076054 containerd[1488]: time="2025-04-30T00:23:42.075980223Z" level=info msg="StartContainer for \"98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1\" returns successfully" Apr 30 00:23:42.183600 containerd[1488]: time="2025-04-30T00:23:42.183523760Z" level=info msg="shim disconnected" id=98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1 namespace=k8s.io Apr 30 00:23:42.183600 containerd[1488]: time="2025-04-30T00:23:42.183588373Z" level=warning msg="cleaning up after shim disconnected" id=98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1 namespace=k8s.io Apr 30 00:23:42.183600 containerd[1488]: time="2025-04-30T00:23:42.183600676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:23:42.577978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98e1e61ff32d57ad2fbcee963c15dbe5c7f164af7b6f8ee14af0132c4a12e7b1-rootfs.mount: Deactivated successfully. 
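clean-cilium-state is the fourth and last init step before the agent proper; the full chain (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then the cilium-agent main container) is the same container list the cpu_manager was purging stale state for after the old pod's deletion. Once the pod is up, that sequence can be read back in spec order from its status; a client-go sketch, where the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; rest.InClusterConfig() would
	// serve the same purpose from inside a pod.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"cilium-tmwnx", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Statuses come back in spec order: mount-cgroup,
	// apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state.
	for _, st := range pod.Status.InitContainerStatuses {
		fmt.Printf("%-24s ready=%v restarts=%d\n", st.Name, st.Ready, st.RestartCount)
	}
}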
Apr 30 00:23:42.801879 kubelet[2620]: E0430 00:23:42.801844 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:42.803753 containerd[1488]: time="2025-04-30T00:23:42.803678417Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:23:42.925660 containerd[1488]: time="2025-04-30T00:23:42.925489917Z" level=info msg="CreateContainer within sandbox \"9a564fcdc4e038ce37a97ad53106011e6737996541df1f685f13876887cbaff7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"491f30ddf189e5398665c33cb49d93f0600e563e72409ced51aae2ce2aa7df23\"" Apr 30 00:23:42.926334 containerd[1488]: time="2025-04-30T00:23:42.926242286Z" level=info msg="StartContainer for \"491f30ddf189e5398665c33cb49d93f0600e563e72409ced51aae2ce2aa7df23\"" Apr 30 00:23:42.964631 systemd[1]: Started cri-containerd-491f30ddf189e5398665c33cb49d93f0600e563e72409ced51aae2ce2aa7df23.scope - libcontainer container 491f30ddf189e5398665c33cb49d93f0600e563e72409ced51aae2ce2aa7df23. Apr 30 00:23:42.998912 containerd[1488]: time="2025-04-30T00:23:42.998860420Z" level=info msg="StartContainer for \"491f30ddf189e5398665c33cb49d93f0600e563e72409ced51aae2ce2aa7df23\" returns successfully" Apr 30 00:23:43.488488 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 30 00:23:43.805489 kubelet[2620]: E0430 00:23:43.805466 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:43.817860 kubelet[2620]: I0430 00:23:43.817796 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tmwnx" podStartSLOduration=5.817775441 podStartE2EDuration="5.817775441s" podCreationTimestamp="2025-04-30 00:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:23:43.817525551 +0000 UTC m=+129.476816686" watchObservedRunningTime="2025-04-30 00:23:43.817775441 +0000 UTC m=+129.477066547" Apr 30 00:23:44.807836 kubelet[2620]: E0430 00:23:44.807784 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:46.723939 systemd-networkd[1421]: lxc_health: Link UP Apr 30 00:23:46.734891 systemd-networkd[1421]: lxc_health: Gained carrier Apr 30 00:23:46.747291 kubelet[2620]: E0430 00:23:46.747237 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:46.814026 kubelet[2620]: E0430 00:23:46.811610 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:47.813078 kubelet[2620]: E0430 00:23:47.813025 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:23:48.045640 systemd-networkd[1421]: lxc_health: Gained IPv6LL Apr 30 00:23:51.737994 sshd[4523]: Connection closed by 
10.0.0.1 port 36946 Apr 30 00:23:51.738528 sshd-session[4521]: pam_unix(sshd:session): session closed for user core Apr 30 00:23:51.742563 systemd[1]: sshd@33-10.0.0.119:22-10.0.0.1:36946.service: Deactivated successfully. Apr 30 00:23:51.744541 systemd[1]: session-34.scope: Deactivated successfully. Apr 30 00:23:51.745166 systemd-logind[1472]: Session 34 logged out. Waiting for processes to exit. Apr 30 00:23:51.746213 systemd-logind[1472]: Removed session 34.
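The closing stretch ties the bring-up together: cilium-agent starts as the pod's main container; the kernel's "No test for seqiv(rfc4106(gcm(aes)))" notice is consistent with the crypto layer instantiating the AES-GCM ESP transform that Cilium's IPsec mode (hence the cilium-ipsec-secrets volume) relies on; the kubelet records the pod as started after about 5.8s; and lxc_health, the veth device Cilium creates for its node health checks, comes up and gains carrier, meaning the datapath is serving again. SSH session 34, opened at 00:23:38 for the whole operation, closes shortly after the network is healthy. One way to verify the health interface from the node, sketched with the vishvananda/netlink package:

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// "lxc_health: Gained carrier" above means this device is oper-up.
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		log.Fatalf("lxc_health not found: %v", err)
	}
	attrs := link.Attrs()
	fmt.Printf("%s: index=%d state=%s\n", attrs.Name, attrs.Index, attrs.OperState)
}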