May 8 00:44:56.926289 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025
May 8 00:44:56.926318 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:44:56.926329 kernel: BIOS-provided physical RAM map:
May 8 00:44:56.926336 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 8 00:44:56.926342 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 8 00:44:56.926348 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 8 00:44:56.926355 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 8 00:44:56.926361 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 8 00:44:56.926367 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 8 00:44:56.926373 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 8 00:44:56.926382 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 8 00:44:56.926388 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 8 00:44:56.926394 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 8 00:44:56.926400 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 8 00:44:56.926408 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 8 00:44:56.926415 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 8 00:44:56.926424 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 8 00:44:56.926430 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 8 00:44:56.926437 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 8 00:44:56.926444 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 8 00:44:56.926450 kernel: NX (Execute Disable) protection: active
May 8 00:44:56.926457 kernel: APIC: Static calls initialized
May 8 00:44:56.926463 kernel: efi: EFI v2.7 by EDK II
May 8 00:44:56.926470 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 8 00:44:56.926477 kernel: SMBIOS 2.8 present.
May 8 00:44:56.926483 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 8 00:44:56.926490 kernel: Hypervisor detected: KVM
May 8 00:44:56.926499 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:44:56.926505 kernel: kvm-clock: using sched offset of 4342648587 cycles
May 8 00:44:56.926512 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:44:56.926519 kernel: tsc: Detected 2794.748 MHz processor
May 8 00:44:56.926526 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:44:56.926534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:44:56.926540 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 8 00:44:56.926547 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 8 00:44:56.926554 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:44:56.926563 kernel: Using GB pages for direct mapping
May 8 00:44:56.926570 kernel: Secure boot disabled
May 8 00:44:56.926576 kernel: ACPI: Early table checksum verification disabled
May 8 00:44:56.926583 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 8 00:44:56.926610 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:44:56.926625 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:56.926633 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:56.926671 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 8 00:44:56.926678 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:56.926685 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:56.926692 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:56.926704 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:56.926711 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 8 00:44:56.926718 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 8 00:44:56.926728 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 8 00:44:56.926735 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 8 00:44:56.926742 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 8 00:44:56.926749 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 8 00:44:56.926756 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 8 00:44:56.926763 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 8 00:44:56.926771 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 8 00:44:56.926777 kernel: No NUMA configuration found
May 8 00:44:56.926785 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 8 00:44:56.926792 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 8 00:44:56.926801 kernel: Zone ranges:
May 8 00:44:56.926808 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:44:56.926815 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 8 00:44:56.926822 kernel: Normal empty
May 8 00:44:56.926829 kernel: Movable zone start for each node
May 8 00:44:56.926836 kernel: Early memory node ranges
May 8 00:44:56.926843 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 8 00:44:56.926850 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 8 00:44:56.926857 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 8 00:44:56.926867 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 8 00:44:56.926874 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 8 00:44:56.926881 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 8 00:44:56.926888 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 8 00:44:56.926895 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:44:56.926902 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 8 00:44:56.926909 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 8 00:44:56.926916 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:44:56.926923 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 8 00:44:56.926932 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 8 00:44:56.926940 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 8 00:44:56.926947 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:44:56.926954 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:44:56.926961 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:44:56.926968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:44:56.926975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:44:56.926982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:44:56.926989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:44:56.926996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:44:56.927005 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:44:56.927013 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:44:56.927020 kernel: TSC deadline timer available
May 8 00:44:56.927027 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 8 00:44:56.927034 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 00:44:56.927041 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:44:56.927048 kernel: kvm-guest: setup PV sched yield
May 8 00:44:56.927062 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 8 00:44:56.927069 kernel: Booting paravirtualized kernel on KVM
May 8 00:44:56.927079 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:44:56.927086 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 8 00:44:56.927093 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
May 8 00:44:56.927101 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
May 8 00:44:56.927108 kernel: pcpu-alloc: [0] 0 1 2 3
May 8 00:44:56.927115 kernel: kvm-guest: PV spinlocks enabled
May 8 00:44:56.927122 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:44:56.927132 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:44:56.927143 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:44:56.927151 kernel: random: crng init done
May 8 00:44:56.927160 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:44:56.927167 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:44:56.927176 kernel: Fallback order for Node 0: 0
May 8 00:44:56.927184 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 8 00:44:56.927191 kernel: Policy zone: DMA32
May 8 00:44:56.927198 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:44:56.927205 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 166140K reserved, 0K cma-reserved)
May 8 00:44:56.927215 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:44:56.927222 kernel: ftrace: allocating 37944 entries in 149 pages
May 8 00:44:56.927229 kernel: ftrace: allocated 149 pages with 4 groups
May 8 00:44:56.927236 kernel: Dynamic Preempt: voluntary
May 8 00:44:56.927251 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:44:56.927261 kernel: rcu: RCU event tracing is enabled.
May 8 00:44:56.927269 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:44:56.927277 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:44:56.927284 kernel: Rude variant of Tasks RCU enabled.
May 8 00:44:56.927292 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:44:56.927299 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:44:56.927306 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:44:56.927316 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 8 00:44:56.927324 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 00:44:56.927331 kernel: Console: colour dummy device 80x25
May 8 00:44:56.927339 kernel: printk: console [ttyS0] enabled
May 8 00:44:56.927346 kernel: ACPI: Core revision 20230628
May 8 00:44:56.927356 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:44:56.927364 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:44:56.927371 kernel: x2apic enabled
May 8 00:44:56.927379 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 00:44:56.927386 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 8 00:44:56.927394 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 8 00:44:56.927401 kernel: kvm-guest: setup PV IPIs
May 8 00:44:56.927409 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:44:56.927416 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:44:56.927426 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 8 00:44:56.927434 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:44:56.927441 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:44:56.927449 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:44:56.927456 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:44:56.927464 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:44:56.927471 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:44:56.927479 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:44:56.927486 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 8 00:44:56.927496 kernel: RETBleed: Mitigation: untrained return thunk
May 8 00:44:56.927504 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:44:56.927511 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 8 00:44:56.927519 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 8 00:44:56.927527 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 8 00:44:56.927535 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 8 00:44:56.927542 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:44:56.927550 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:44:56.927559 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:44:56.927567 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:44:56.927574 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 8 00:44:56.927582 kernel: Freeing SMP alternatives memory: 32K
May 8 00:44:56.927589 kernel: pid_max: default: 32768 minimum: 301
May 8 00:44:56.927597 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 00:44:56.927604 kernel: landlock: Up and running.
May 8 00:44:56.927612 kernel: SELinux: Initializing.
May 8 00:44:56.927619 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:44:56.927629 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:44:56.927637 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 8 00:44:56.927662 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:44:56.927669 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:44:56.927677 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 00:44:56.927685 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:44:56.927692 kernel: ... version: 0
May 8 00:44:56.927699 kernel: ... bit width: 48
May 8 00:44:56.927707 kernel: ... generic registers: 6
May 8 00:44:56.927717 kernel: ... value mask: 0000ffffffffffff
May 8 00:44:56.927724 kernel: ... max period: 00007fffffffffff
May 8 00:44:56.927732 kernel: ... fixed-purpose events: 0
May 8 00:44:56.927739 kernel: ... event mask: 000000000000003f
May 8 00:44:56.927747 kernel: signal: max sigframe size: 1776
May 8 00:44:56.927754 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:44:56.927762 kernel: rcu: Max phase no-delay instances is 400.
May 8 00:44:56.927769 kernel: smp: Bringing up secondary CPUs ...
May 8 00:44:56.927777 kernel: smpboot: x86: Booting SMP configuration:
May 8 00:44:56.927786 kernel: .... node #0, CPUs: #1 #2 #3
May 8 00:44:56.927794 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:44:56.927801 kernel: smpboot: Max logical packages: 1
May 8 00:44:56.927809 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 8 00:44:56.927816 kernel: devtmpfs: initialized
May 8 00:44:56.927824 kernel: x86/mm: Memory block size: 128MB
May 8 00:44:56.927831 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 8 00:44:56.927839 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 8 00:44:56.927847 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 8 00:44:56.927857 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 8 00:44:56.927864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 8 00:44:56.927872 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:44:56.927880 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:44:56.927887 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:44:56.927895 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:44:56.927902 kernel: audit: initializing netlink subsys (disabled)
May 8 00:44:56.927910 kernel: audit: type=2000 audit(1746665096.036:1): state=initialized audit_enabled=0 res=1
May 8 00:44:56.927917 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:44:56.927927 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:44:56.927934 kernel: cpuidle: using governor menu
May 8 00:44:56.927942 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:44:56.927949 kernel: dca service started, version 1.12.1
May 8 00:44:56.927957 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 8 00:44:56.927965 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 8 00:44:56.927972 kernel: PCI: Using configuration type 1 for base access
May 8 00:44:56.927980 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:44:56.927989 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:44:56.928002 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 8 00:44:56.928013 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:44:56.928024 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 00:44:56.928034 kernel: ACPI: Added _OSI(Module Device)
May 8 00:44:56.928045 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:44:56.928060 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:44:56.928068 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:44:56.928075 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:44:56.928083 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 00:44:56.928093 kernel: ACPI: Interpreter enabled
May 8 00:44:56.928101 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:44:56.928109 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:44:56.928116 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:44:56.928124 kernel: PCI: Using E820 reservations for host bridge windows
May 8 00:44:56.928131 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:44:56.928139 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:44:56.928353 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:44:56.928486 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:44:56.928608 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:44:56.928617 kernel: PCI host bridge to bus 0000:00
May 8 00:44:56.928794 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:44:56.928906 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:44:56.929015 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:44:56.929155 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 8 00:44:56.929281 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:44:56.929390 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 8 00:44:56.929500 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:44:56.929666 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:44:56.929807 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:44:56.929928 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 8 00:44:56.930063 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 8 00:44:56.930185 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 8 00:44:56.930335 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 8 00:44:56.930459 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:44:56.930618 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:44:56.930793 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 8 00:44:56.930918 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 8 00:44:56.931045 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 8 00:44:56.931201 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 8 00:44:56.931324 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 8 00:44:56.931445 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 8 00:44:56.931567 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 8 00:44:56.931717 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 8 00:44:56.931841 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 8 00:44:56.931964 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 8 00:44:56.932096 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 8 00:44:56.932216 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 8 00:44:56.932358 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:44:56.932491 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:44:56.932629 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:44:56.932768 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 8 00:44:56.932892 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 8 00:44:56.933030 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:44:56.933165 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 8 00:44:56.933177 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:44:56.933186 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:44:56.933195 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:44:56.933202 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:44:56.933214 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:44:56.933221 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:44:56.933229 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:44:56.933236 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:44:56.933244 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:44:56.933251 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:44:56.933259 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:44:56.933267 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:44:56.933274 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:44:56.933284 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:44:56.933291 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:44:56.933299 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:44:56.933306 kernel: iommu: Default domain type: Translated
May 8 00:44:56.933314 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:44:56.933321 kernel: efivars: Registered efivars operations
May 8 00:44:56.933329 kernel: PCI: Using ACPI for IRQ routing
May 8 00:44:56.933336 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:44:56.933344 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 8 00:44:56.933354 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 8 00:44:56.933361 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 8 00:44:56.933369 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 8 00:44:56.933507 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:44:56.933670 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:44:56.933792 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:44:56.933802 kernel: vgaarb: loaded
May 8 00:44:56.933810 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:44:56.933817 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:44:56.933829 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:44:56.933836 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:44:56.933844 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:44:56.933852 kernel: pnp: PnP ACPI init
May 8 00:44:56.933998 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 8 00:44:56.934010 kernel: pnp: PnP ACPI: found 6 devices
May 8 00:44:56.934018 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:44:56.934026 kernel: NET: Registered PF_INET protocol family
May 8 00:44:56.934037 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:44:56.934044 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:44:56.934061 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:44:56.934068 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:44:56.934076 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 00:44:56.934084 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:44:56.934091 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:44:56.934099 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:44:56.934107 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:44:56.934117 kernel: NET: Registered PF_XDP protocol family
May 8 00:44:56.934239 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 8 00:44:56.934375 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 8 00:44:56.934489 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:44:56.934599 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:44:56.934733 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:44:56.934844 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 8 00:44:56.934953 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 8 00:44:56.935076 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 8 00:44:56.935087 kernel: PCI: CLS 0 bytes, default 64
May 8 00:44:56.935094 kernel: Initialise system trusted keyrings
May 8 00:44:56.935103 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:44:56.935110 kernel: Key type asymmetric registered
May 8 00:44:56.935117 kernel: Asymmetric key parser 'x509' registered
May 8 00:44:56.935125 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 00:44:56.935133 kernel: io scheduler mq-deadline registered
May 8 00:44:56.935142 kernel: io scheduler kyber registered
May 8 00:44:56.935154 kernel: io scheduler bfq registered
May 8 00:44:56.935161 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:44:56.935171 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:44:56.935179 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:44:56.935188 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 8 00:44:56.935197 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:44:56.935204 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:44:56.935212 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:44:56.935220 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:44:56.935230 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:44:56.935373 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 00:44:56.935384 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:44:56.935496 kernel: rtc_cmos 00:04: registered as rtc0
May 8 00:44:56.935608 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:44:56 UTC (1746665096)
May 8 00:44:56.935808 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 8 00:44:56.935819 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 00:44:56.935830 kernel: efifb: probing for efifb
May 8 00:44:56.935837 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 8 00:44:56.935845 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 8 00:44:56.935853 kernel: efifb: scrolling: redraw
May 8 00:44:56.935860 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 8 00:44:56.935868 kernel: Console: switching to colour frame buffer device 100x37
May 8 00:44:56.935894 kernel: fb0: EFI VGA frame buffer device
May 8 00:44:56.935904 kernel: pstore: Using crash dump compression: deflate
May 8 00:44:56.935912 kernel: pstore: Registered efi_pstore as persistent store backend
May 8 00:44:56.935922 kernel: NET: Registered PF_INET6 protocol family
May 8 00:44:56.935930 kernel: Segment Routing with IPv6
May 8 00:44:56.935937 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:44:56.935945 kernel: NET: Registered PF_PACKET protocol family
May 8 00:44:56.935953 kernel: Key type dns_resolver registered
May 8 00:44:56.935961 kernel: IPI shorthand broadcast: enabled
May 8 00:44:56.935969 kernel: sched_clock: Marking stable (1382003136, 129589212)->(1603667071, -92074723)
May 8 00:44:56.935977 kernel: registered taskstats version 1
May 8 00:44:56.935984 kernel: Loading compiled-in X.509 certificates
May 8 00:44:56.935992 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e'
May 8 00:44:56.936003 kernel: Key type .fscrypt registered
May 8 00:44:56.936010 kernel: Key type fscrypt-provisioning registered
May 8 00:44:56.936018 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:44:56.936026 kernel: ima: Allocated hash algorithm: sha1
May 8 00:44:56.936034 kernel: ima: No architecture policies found
May 8 00:44:56.936042 kernel: clk: Disabling unused clocks
May 8 00:44:56.936049 kernel: Freeing unused kernel image (initmem) memory: 42856K
May 8 00:44:56.936065 kernel: Write protecting the kernel read-only data: 36864k
May 8 00:44:56.936076 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 8 00:44:56.936084 kernel: Run /init as init process
May 8 00:44:56.936091 kernel: with arguments:
May 8 00:44:56.936100 kernel: /init
May 8 00:44:56.936107 kernel: with environment:
May 8 00:44:56.936115 kernel: HOME=/
May 8 00:44:56.936123 kernel: TERM=linux
May 8 00:44:56.936131 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:44:56.936141 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:44:56.936153 systemd[1]: Detected virtualization kvm.
May 8 00:44:56.936162 systemd[1]: Detected architecture x86-64.
May 8 00:44:56.936170 systemd[1]: Running in initrd.
May 8 00:44:56.936180 systemd[1]: No hostname configured, using default hostname.
May 8 00:44:56.936191 systemd[1]: Hostname set to .
May 8 00:44:56.936201 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:44:56.936210 systemd[1]: Queued start job for default target initrd.target.
May 8 00:44:56.936218 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:44:56.936226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:44:56.936235 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 00:44:56.936244 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:44:56.936252 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:44:56.936263 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:44:56.936273 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:44:56.936282 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:44:56.936291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:44:56.936299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:44:56.936308 systemd[1]: Reached target paths.target - Path Units.
May 8 00:44:56.936316 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:44:56.936327 systemd[1]: Reached target swap.target - Swaps.
May 8 00:44:56.936335 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:44:56.936343 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:44:56.936352 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:44:56.936360 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:44:56.936368 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:44:56.936377 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:44:56.936385 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:44:56.936396 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:44:56.936405 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:44:56.936413 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:44:56.936421 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:44:56.936430 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:44:56.936438 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:44:56.936447 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:44:56.936455 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:44:56.936463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:44:56.936474 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:44:56.936483 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:44:56.936491 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:44:56.936518 systemd-journald[194]: Collecting audit messages is disabled.
May 8 00:44:56.936539 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:44:56.936548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:44:56.936556 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:44:56.936565 systemd-journald[194]: Journal started
May 8 00:44:56.936586 systemd-journald[194]: Runtime Journal (/run/log/journal/e0ae299b953f4348aa9e43f8947929b0) is 6.0M, max 48.3M, 42.2M free.
May 8 00:44:56.924612 systemd-modules-load[195]: Inserted module 'overlay'
May 8 00:44:56.945668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:44:56.947690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:44:56.951668 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:44:56.953659 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:44:56.953981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:44:56.957172 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 8 00:44:56.958441 kernel: Bridge firewalling registered
May 8 00:44:56.958661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:44:56.960279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:44:56.969420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:44:56.971787 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:44:56.974629 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:44:56.977417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:44:56.996852 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:44:56.998538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:44:57.013037 dracut-cmdline[228]: dracut-dracut-053
May 8 00:44:57.016873 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:44:57.034102 systemd-resolved[230]: Positive Trust Anchors:
May 8 00:44:57.034123 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:44:57.034155 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:44:57.036982 systemd-resolved[230]: Defaulting to hostname 'linux'.
May 8 00:44:57.038147 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:44:57.049763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:44:57.147711 kernel: SCSI subsystem initialized
May 8 00:44:57.157697 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:44:57.168701 kernel: iscsi: registered transport (tcp)
May 8 00:44:57.191702 kernel: iscsi: registered transport (qla4xxx)
May 8 00:44:57.191818 kernel: QLogic iSCSI HBA Driver
May 8 00:44:57.249664 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:44:57.267770 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:44:57.291866 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:44:57.291947 kernel: device-mapper: uevent: version 1.0.3
May 8 00:44:57.291960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:44:57.338698 kernel: raid6: avx2x4 gen() 27002 MB/s
May 8 00:44:57.384685 kernel: raid6: avx2x2 gen() 29812 MB/s
May 8 00:44:57.401773 kernel: raid6: avx2x1 gen() 25921 MB/s
May 8 00:44:57.401809 kernel: raid6: using algorithm avx2x2 gen() 29812 MB/s
May 8 00:44:57.419813 kernel: raid6: .... xor() 19421 MB/s, rmw enabled
May 8 00:44:57.419863 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:44:57.440669 kernel: xor: automatically using best checksumming function avx
May 8 00:44:57.604691 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:44:57.618952 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:44:57.630836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:44:57.642996 systemd-udevd[414]: Using default interface naming scheme 'v255'.
May 8 00:44:57.647980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:44:57.654804 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:44:57.671412 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
May 8 00:44:57.705546 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:44:57.718869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:44:57.785690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:44:57.796850 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:44:57.811059 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:44:57.813102 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:44:57.816652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:44:57.819098 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:44:57.823668 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:44:57.831894 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:44:57.838069 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:44:57.838121 kernel: AES CTR mode by8 optimization enabled
May 8 00:44:57.841748 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 8 00:44:57.860297 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:44:57.860452 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:44:57.860465 kernel: GPT:9289727 != 19775487
May 8 00:44:57.860482 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:44:57.860492 kernel: GPT:9289727 != 19775487
May 8 00:44:57.860502 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:44:57.860512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:44:57.860522 kernel: libata version 3.00 loaded.
May 8 00:44:57.846824 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:44:57.863722 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:44:57.864471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:44:57.868083 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:44:57.872930 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:44:57.911721 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:44:57.911745 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:44:57.911904 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:44:57.912056 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474)
May 8 00:44:57.912068 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (473)
May 8 00:44:57.912079 kernel: scsi host0: ahci
May 8 00:44:57.912230 kernel: scsi host1: ahci
May 8 00:44:57.912372 kernel: scsi host2: ahci
May 8 00:44:57.912523 kernel: scsi host3: ahci
May 8 00:44:57.912867 kernel: scsi host4: ahci
May 8 00:44:57.913063 kernel: scsi host5: ahci
May 8 00:44:57.913208 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 8 00:44:57.913219 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 8 00:44:57.913230 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 8 00:44:57.913240 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 8 00:44:57.913254 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 8 00:44:57.913265 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 8 00:44:57.869300 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:44:57.869916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:44:57.875394 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:44:57.889004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:44:57.901926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:44:57.906706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:44:57.925808 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:44:57.938599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:44:57.948499 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:44:57.951818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:44:57.964782 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:44:57.967835 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:44:57.990900 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:44:58.116723 disk-uuid[567]: Primary Header is updated.
May 8 00:44:58.116723 disk-uuid[567]: Secondary Entries is updated.
May 8 00:44:58.116723 disk-uuid[567]: Secondary Header is updated.
May 8 00:44:58.121689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:44:58.126679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:44:58.221673 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:44:58.221729 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:44:58.222672 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:44:58.223675 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:44:58.224674 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:44:58.226203 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 8 00:44:58.226221 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 8 00:44:58.226232 kernel: ata3.00: applying bridge limits
May 8 00:44:58.227665 kernel: ata3.00: configured for UDMA/100
May 8 00:44:58.227681 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 8 00:44:58.267671 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 8 00:44:58.281388 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:44:58.281402 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:44:59.141685 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:44:59.141935 disk-uuid[577]: The operation has completed successfully.
May 8 00:44:59.170172 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:44:59.170292 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:44:59.199785 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:44:59.205201 sh[592]: Success
May 8 00:44:59.218697 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:44:59.249295 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:44:59.263046 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:44:59.268254 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:44:59.277873 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972
May 8 00:44:59.277901 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:44:59.277912 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:44:59.278916 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:44:59.280655 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:44:59.284157 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:44:59.286524 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:44:59.299752 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:44:59.302439 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:44:59.311139 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:44:59.311177 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:44:59.311192 kernel: BTRFS info (device vda6): using free space tree
May 8 00:44:59.315661 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:44:59.324252 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:44:59.326133 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:44:59.402287 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:44:59.418901 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:44:59.441054 systemd-networkd[770]: lo: Link UP
May 8 00:44:59.441062 systemd-networkd[770]: lo: Gained carrier
May 8 00:44:59.442576 systemd-networkd[770]: Enumeration completed
May 8 00:44:59.442687 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:44:59.443075 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:44:59.443079 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:44:59.443315 systemd[1]: Reached target network.target - Network.
May 8 00:44:59.444179 systemd-networkd[770]: eth0: Link UP
May 8 00:44:59.444183 systemd-networkd[770]: eth0: Gained carrier
May 8 00:44:59.444190 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:44:59.465700 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:44:59.587139 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:44:59.608809 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:44:59.651493 ignition[775]: Ignition 2.19.0
May 8 00:44:59.651507 ignition[775]: Stage: fetch-offline
May 8 00:44:59.651551 ignition[775]: no configs at "/usr/lib/ignition/base.d"
May 8 00:44:59.651563 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:44:59.651711 ignition[775]: parsed url from cmdline: ""
May 8 00:44:59.651716 ignition[775]: no config URL provided
May 8 00:44:59.651723 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:44:59.651734 ignition[775]: no config at "/usr/lib/ignition/user.ign"
May 8 00:44:59.651761 ignition[775]: op(1): [started] loading QEMU firmware config module
May 8 00:44:59.651767 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:44:59.657525 ignition[775]: op(1): [finished] loading QEMU firmware config module
May 8 00:44:59.697461 ignition[775]: parsing config with SHA512: 84f007992c3fcae0b250e65c54cfaed26a882ed3c0c4d82da681395ebfe876d497e1fbb448b331bc90a6629c935c3b2c6c045e6c68a45fcec296f62edf957297
May 8 00:44:59.701114 unknown[775]: fetched base config from "system"
May 8 00:44:59.701136 unknown[775]: fetched user config from "qemu"
May 8 00:44:59.701622 ignition[775]: fetch-offline: fetch-offline passed
May 8 00:44:59.703686 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:44:59.701720 ignition[775]: Ignition finished successfully
May 8 00:44:59.717126 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:44:59.723869 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:44:59.735371 ignition[784]: Ignition 2.19.0
May 8 00:44:59.735381 ignition[784]: Stage: kargs
May 8 00:44:59.735536 ignition[784]: no configs at "/usr/lib/ignition/base.d"
May 8 00:44:59.735547 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:44:59.736444 ignition[784]: kargs: kargs passed
May 8 00:44:59.736486 ignition[784]: Ignition finished successfully
May 8 00:44:59.739687 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:44:59.758009 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:44:59.843311 ignition[792]: Ignition 2.19.0
May 8 00:44:59.843322 ignition[792]: Stage: disks
May 8 00:44:59.843510 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 8 00:44:59.843522 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:44:59.846678 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:44:59.844589 ignition[792]: disks: disks passed
May 8 00:44:59.849741 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:44:59.844654 ignition[792]: Ignition finished successfully
May 8 00:44:59.851341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:44:59.853543 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:44:59.854025 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:44:59.854411 systemd[1]: Reached target basic.target - Basic System.
May 8 00:44:59.863850 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:44:59.881818 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:44:59.895186 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:44:59.903876 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:44:59.999687 kernel: EXT4-fs (vda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none.
May 8 00:45:00.000351 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:45:00.001543 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:45:00.012784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:45:00.014694 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:45:00.015214 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:45:00.015256 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:45:00.049401 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
May 8 00:45:00.015279 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:45:00.053362 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:45:00.053398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:45:00.053413 kernel: BTRFS info (device vda6): using free space tree
May 8 00:45:00.055679 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:45:00.057968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:45:00.087014 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:45:00.090080 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:45:00.132970 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:45:00.138850 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
May 8 00:45:00.143850 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:45:00.149329 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:45:00.235457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:45:00.262949 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:45:00.266339 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:45:00.276557 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:45:00.327158 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:45:00.348486 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:45:00.370091 ignition[928]: INFO : Ignition 2.19.0
May 8 00:45:00.370091 ignition[928]: INFO : Stage: mount
May 8 00:45:00.424397 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:45:00.424397 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:45:00.426610 ignition[928]: INFO : mount: mount passed
May 8 00:45:00.426610 ignition[928]: INFO : Ignition finished successfully
May 8 00:45:00.430089 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:45:00.438837 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:45:00.451819 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:45:00.462674 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
May 8 00:45:00.465226 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:45:00.465253 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:45:00.465264 kernel: BTRFS info (device vda6): using free space tree
May 8 00:45:00.468671 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:45:00.470869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:45:00.537605 ignition[955]: INFO : Ignition 2.19.0
May 8 00:45:00.537605 ignition[955]: INFO : Stage: files
May 8 00:45:00.539633 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:45:00.539633 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:45:00.539633 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:45:00.543767 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:45:00.543767 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:45:00.549127 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:45:00.550838 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:45:00.552507 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:45:00.551314 unknown[955]: wrote ssh authorized keys file for user: core
May 8 00:45:00.555433 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 8 00:45:00.555433 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 8 00:45:00.652473 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:45:00.653763 systemd-networkd[770]: eth0: Gained IPv6LL
May 8 00:45:01.115545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 8 00:45:01.115545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:45:01.129439 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 8 00:45:01.593731 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:45:01.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:45:01.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:45:01.717146 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:45:01.717146 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:45:01.720673 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:45:01.720673 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:45:01.724187 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:45:01.724187 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:45:01.727792 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:45:01.729827 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:45:01.731757 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:45:01.733619 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:45:01.736182 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:45:01.738677 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:45:01.740793 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 8 00:45:02.048216 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:45:02.564608 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 8 00:45:02.564608 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 00:45:02.568401 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:45:02.570781 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:45:02.570781 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 00:45:02.570781 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 8 00:45:02.575711 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:45:02.578067 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:45:02.578067 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 8 00:45:02.578067 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:45:02.622973 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:45:02.630071 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:45:02.631850 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:45:02.631850 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:45:02.634801 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:45:02.636309 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:45:02.638263 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:45:02.640106 ignition[955]: INFO : files: files passed
May 8 00:45:02.640906 ignition[955]: INFO : Ignition finished successfully
May 8 00:45:02.644565 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:45:02.653846 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:45:02.656999 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:45:02.659743 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:45:02.660795 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:45:02.667483 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:45:02.671519 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:45:02.671519 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:45:02.676246 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:45:02.674464 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:45:02.676436 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:45:02.681956 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:45:02.709528 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:45:02.710572 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:45:02.713200 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:45:02.715253 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:45:02.717263 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:45:02.728796 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:45:02.742551 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:45:02.748808 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:45:02.760386 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:45:02.761694 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:45:02.763933 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:45:02.765912 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:45:02.766054 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:45:02.768182 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:45:02.769929 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:45:02.771951 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:45:02.773971 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:45:02.775990 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:45:02.778138 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:45:02.780234 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:45:02.782747 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:45:02.785093 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:45:02.787355 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:45:02.789509 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:45:02.789713 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:45:02.792222 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:45:02.794196 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:45:02.796302 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:45:02.796500 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:45:02.798827 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:45:02.799044 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:45:02.801588 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:45:02.801829 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:45:02.803829 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:45:02.805820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:45:02.806078 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:45:02.808754 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:45:02.810843 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:45:02.812942 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:45:02.813115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:45:02.815028 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:45:02.815118 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:45:02.817198 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:45:02.817392 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:45:02.819629 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:45:02.819752 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:45:02.832936 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:45:02.834827 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:45:02.835006 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:45:02.838865 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:45:02.840800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:45:02.840976 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:45:02.843245 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:45:02.843382 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:45:02.851922 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:45:02.853137 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:45:02.870546 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:45:02.874980 ignition[1010]: INFO : Ignition 2.19.0
May 8 00:45:02.874980 ignition[1010]: INFO : Stage: umount
May 8 00:45:02.876794 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:45:02.876794 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:45:02.880011 ignition[1010]: INFO : umount: umount passed
May 8 00:45:02.880858 ignition[1010]: INFO : Ignition finished successfully
May 8 00:45:02.883228 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:45:02.884323 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:45:02.886555 systemd[1]: Stopped target network.target - Network.
May 8 00:45:02.888442 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:45:02.888500 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:45:02.891761 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:45:02.892889 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:45:02.895140 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:45:02.895196 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:45:02.898189 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:45:02.899195 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:45:02.901459 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:45:02.903688 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:45:02.908690 systemd-networkd[770]: eth0: DHCPv6 lease lost
May 8 00:45:02.910723 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:45:02.911967 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:45:02.935386 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:45:02.936450 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:45:02.941611 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:45:02.941681 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:45:02.956759 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:45:02.966725 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:45:02.966785 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:45:02.970870 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:45:02.972029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:45:02.974509 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:45:02.975699 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:45:02.978420 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:45:02.978497 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:45:02.982416 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:45:02.993520 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:45:02.993702 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:45:03.007881 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:45:03.008084 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:45:03.010320 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:45:03.010369 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:45:03.011929 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:45:03.011970 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:45:03.013929 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:45:03.013981 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:45:03.017472 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:45:03.017521 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:45:03.019800 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:45:03.019849 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:45:03.033959 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:45:03.043608 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:45:03.044952 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:45:03.048073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:45:03.049311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:45:03.052297 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:45:03.053633 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:45:03.210128 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:45:03.211225 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:45:03.213723 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:45:03.215847 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:45:03.216917 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:45:03.225781 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:45:03.235413 systemd[1]: Switching root.
May 8 00:45:03.269203 systemd-journald[194]: Journal stopped
May 8 00:45:04.878584 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
May 8 00:45:04.878696 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:45:04.878712 kernel: SELinux: policy capability open_perms=1
May 8 00:45:04.878724 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:45:04.878735 kernel: SELinux: policy capability always_check_network=0
May 8 00:45:04.878755 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:45:04.878771 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:45:04.878786 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:45:04.878798 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:45:04.878813 kernel: audit: type=1403 audit(1746665104.123:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:45:04.878826 systemd[1]: Successfully loaded SELinux policy in 45.410ms.
May 8 00:45:04.878843 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.331ms.
May 8 00:45:04.878856 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:45:04.878876 systemd[1]: Detected virtualization kvm.
May 8 00:45:04.878891 systemd[1]: Detected architecture x86-64.
May 8 00:45:04.878903 systemd[1]: Detected first boot.
May 8 00:45:04.878914 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:45:04.878926 zram_generator::config[1053]: No configuration found.
May 8 00:45:04.878940 systemd[1]: Populated /etc with preset unit settings.
May 8 00:45:04.878953 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:45:04.878965 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:45:04.878977 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:45:04.878994 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:45:04.879009 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:45:04.879024 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:45:04.879039 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:45:04.879054 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:45:04.879074 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:45:04.879089 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:45:04.879101 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:45:04.879113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:45:04.879127 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:45:04.879139 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:45:04.879151 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:45:04.879164 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:45:04.879176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:45:04.879188 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:45:04.879200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:45:04.879212 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:45:04.879224 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:45:04.879239 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:45:04.879251 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:45:04.879263 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:45:04.879275 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:45:04.879287 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:45:04.879299 systemd[1]: Reached target swap.target - Swaps.
May 8 00:45:04.879311 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:45:04.879322 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:45:04.879337 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:45:04.879349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:45:04.879361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:45:04.879373 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:45:04.879385 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:45:04.879396 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:45:04.879408 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:45:04.879420 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:04.879435 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:45:04.879447 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:45:04.879458 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:45:04.879471 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:45:04.879483 systemd[1]: Reached target machines.target - Containers.
May 8 00:45:04.879496 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:45:04.879508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:45:04.879520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:45:04.879532 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:45:04.879547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:45:04.879559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:45:04.879570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:45:04.879582 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:45:04.879594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:45:04.879607 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:45:04.879618 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:45:04.879631 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:45:04.879657 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:45:04.879669 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:45:04.879680 kernel: fuse: init (API version 7.39)
May 8 00:45:04.879691 kernel: loop: module loaded
May 8 00:45:04.879703 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:45:04.879715 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:45:04.879726 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:45:04.879738 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:45:04.879767 systemd-journald[1123]: Collecting audit messages is disabled.
May 8 00:45:04.879800 systemd-journald[1123]: Journal started
May 8 00:45:04.879821 systemd-journald[1123]: Runtime Journal (/run/log/journal/e0ae299b953f4348aa9e43f8947929b0) is 6.0M, max 48.3M, 42.2M free.
May 8 00:45:04.653483 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:45:04.676808 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 00:45:04.677262 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:45:04.882668 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:45:04.884796 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:45:04.884822 systemd[1]: Stopped verity-setup.service.
May 8 00:45:04.889671 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:04.892664 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:45:04.894959 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:45:04.896573 kernel: ACPI: bus type drm_connector registered
May 8 00:45:04.896721 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:45:04.897958 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:45:04.899037 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:45:04.900242 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:45:04.901517 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:45:04.902815 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:45:04.904321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:45:04.905877 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:45:04.906071 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:45:04.907536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:45:04.907720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:45:04.909154 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:45:04.909324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:45:04.910679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:45:04.910846 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:45:04.912366 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:45:04.912532 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:45:04.913916 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:45:04.914082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:45:04.915451 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:45:04.916923 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:45:04.918690 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:45:04.935392 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:45:04.941857 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:45:04.944631 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:45:04.945932 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:45:04.945967 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:45:04.948169 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 00:45:04.950598 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:45:04.952933 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:45:04.954177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:45:04.957767 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:45:04.961374 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:45:04.962583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:45:04.966050 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:45:04.966567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:45:04.971919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:45:04.977682 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:45:04.986826 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:45:04.990705 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:45:04.992450 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:45:04.993959 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:45:04.995545 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:45:04.998777 systemd-journald[1123]: Time spent on flushing to /var/log/journal/e0ae299b953f4348aa9e43f8947929b0 is 14.785ms for 998 entries.
May 8 00:45:04.998777 systemd-journald[1123]: System Journal (/var/log/journal/e0ae299b953f4348aa9e43f8947929b0) is 8.0M, max 195.6M, 187.6M free.
May 8 00:45:05.101099 systemd-journald[1123]: Received client request to flush runtime journal.
May 8 00:45:05.101153 kernel: loop0: detected capacity change from 0 to 218376
May 8 00:45:05.009142 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:45:05.015446 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:45:05.092912 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 00:45:05.095490 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:45:05.097030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:45:05.103404 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:45:05.110729 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:45:05.111011 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 00:45:05.115711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:45:05.117106 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 00:45:05.121968 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:45:05.132495 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:45:05.134749 kernel: loop1: detected capacity change from 0 to 142488
May 8 00:45:05.154053 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
May 8 00:45:05.154075 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
May 8 00:45:05.161230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:45:05.171664 kernel: loop2: detected capacity change from 0 to 140768
May 8 00:45:05.263685 kernel: loop3: detected capacity change from 0 to 218376
May 8 00:45:05.273674 kernel: loop4: detected capacity change from 0 to 142488
May 8 00:45:05.284671 kernel: loop5: detected capacity change from 0 to 140768
May 8 00:45:05.298067 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 8 00:45:05.298692 (sd-merge)[1192]: Merged extensions into '/usr'.
May 8 00:45:05.302768 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:45:05.302944 systemd[1]: Reloading...
May 8 00:45:05.438679 zram_generator::config[1214]: No configuration found.
May 8 00:45:05.487015 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:45:05.616570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:45:05.665976 systemd[1]: Reloading finished in 362 ms.
May 8 00:45:05.705740 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:45:05.707449 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:45:05.713295 systemd[1]: Starting ensure-sysext.service...
May 8 00:45:05.715286 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:45:05.725458 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
May 8 00:45:05.725474 systemd[1]: Reloading...
May 8 00:45:05.781668 zram_generator::config[1286]: No configuration found.
May 8 00:45:05.800890 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:45:05.801310 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:45:05.802319 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:45:05.802796 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
May 8 00:45:05.802893 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
May 8 00:45:05.807367 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:45:05.807388 systemd-tmpfiles[1256]: Skipping /boot
May 8 00:45:05.818443 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:45:05.818458 systemd-tmpfiles[1256]: Skipping /boot
May 8 00:45:05.900437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:45:05.954831 systemd[1]: Reloading finished in 228 ms.
May 8 00:45:05.975239 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:45:05.987440 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:45:05.998028 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:45:06.000983 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:45:06.003946 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:45:06.009978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:45:06.014019 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:45:06.017877 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:45:06.022144 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:06.022370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:45:06.024152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:45:06.028823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:45:06.034185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:45:06.035712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:45:06.039933 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:45:06.041275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:06.042411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:45:06.042582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:45:06.044774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:45:06.044970 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:45:06.046796 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:45:06.046996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:45:06.055130 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:45:06.058198 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
May 8 00:45:06.064847 augenrules[1351]: No rules
May 8 00:45:06.065052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:06.065467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:45:06.070883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:45:06.074694 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:45:06.080052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:45:06.081254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:45:06.084452 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:45:06.085567 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:06.086530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:45:06.088479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:45:06.096034 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:45:06.098020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:45:06.098225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:45:06.099905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:45:06.100083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:45:06.102224 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:45:06.102522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:45:06.107331 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:45:06.122273 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:45:06.125208 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:45:06.147722 systemd[1]: Finished ensure-sysext.service.
May 8 00:45:06.151227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:06.151380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:45:06.160896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:45:06.164088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:45:06.166807 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:45:06.173802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:45:06.181663 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1378)
May 8 00:45:06.183828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:45:06.193478 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:45:06.200942 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:45:06.203705 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:45:06.203739 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:06.204284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:45:06.204487 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:45:06.206165 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:45:06.206345 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:45:06.217089 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 00:45:06.213133 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:45:06.228428 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:45:06.228691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:45:06.230243 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:45:06.231890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:45:06.232093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:45:06.235085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:45:06.237684 kernel: ACPI: button: Power Button [PWRF]
May 8 00:45:06.254613 systemd-resolved[1326]: Positive Trust Anchors:
May 8 00:45:06.255438 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:45:06.255529 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:45:06.262603 systemd-resolved[1326]: Defaulting to hostname 'linux'.
May 8 00:45:06.265665 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 00:45:06.267406 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:45:06.269807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:45:06.277449 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 8 00:45:06.277718 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 8 00:45:06.277950 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 8 00:45:06.278127 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 8 00:45:06.280757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:45:06.281957 systemd-networkd[1400]: lo: Link UP
May 8 00:45:06.281970 systemd-networkd[1400]: lo: Gained carrier
May 8 00:45:06.284780 systemd-networkd[1400]: Enumeration completed
May 8 00:45:06.285428 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:45:06.285433 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:45:06.286590 systemd-networkd[1400]: eth0: Link UP
May 8 00:45:06.286595 systemd-networkd[1400]: eth0: Gained carrier
May 8 00:45:06.286606 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:45:06.294277 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:45:06.295759 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:45:06.296936 systemd[1]: Reached target network.target - Network.
May 8 00:45:06.301815 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:45:06.305956 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:45:06.310197 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:45:07.303486 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 00:45:07.303535 systemd-timesyncd[1402]: Initial clock synchronization to Thu 2025-05-08 00:45:07.303379 UTC.
May 8 00:45:07.303569 systemd-resolved[1326]: Clock change detected. Flushing caches.
May 8 00:45:07.305059 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:45:07.317308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:45:07.358866 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:45:07.364452 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:45:07.370213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:45:07.370438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:45:07.377671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:45:07.440724 kernel: kvm_amd: TSC scaling supported
May 8 00:45:07.440796 kernel: kvm_amd: Nested Virtualization enabled
May 8 00:45:07.440812 kernel: kvm_amd: Nested Paging enabled
May 8 00:45:07.441864 kernel: kvm_amd: LBR virtualization supported
May 8 00:45:07.441891 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 8 00:45:07.443176 kernel: kvm_amd: Virtual GIF supported
May 8 00:45:07.462433 kernel: EDAC MC: Ver: 3.0.0
May 8 00:45:07.472085 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:45:07.502675 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:45:07.517567 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:45:07.527723 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:45:07.563862 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:45:07.565615 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:45:07.566771 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:45:07.567988 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:45:07.569237 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:45:07.570719 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:45:07.572006 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:45:07.573377 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:45:07.574869 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:45:07.574917 systemd[1]: Reached target paths.target - Path Units.
May 8 00:45:07.575885 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:45:07.577932 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:45:07.581058 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:45:07.592965 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:45:07.595663 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:45:07.597497 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:45:07.598785 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:45:07.599819 systemd[1]: Reached target basic.target - Basic System.
May 8 00:45:07.600115 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:45:07.600143 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:45:07.601135 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:45:07.603226 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:45:07.607428 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:45:07.607754 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:45:07.612044 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:45:07.614309 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:45:07.615768 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:45:07.618520 jq[1437]: false
May 8 00:45:07.618635 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:45:07.624678 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:45:07.634900 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:45:07.641453 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:45:07.644101 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:45:07.644779 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:45:07.645398 dbus-daemon[1436]: [system] SELinux support is enabled
May 8 00:45:07.646478 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:45:07.650559 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:45:07.654038 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:45:07.664825 extend-filesystems[1438]: Found loop3
May 8 00:45:07.669254 extend-filesystems[1438]: Found loop4
May 8 00:45:07.669254 extend-filesystems[1438]: Found loop5
May 8 00:45:07.669254 extend-filesystems[1438]: Found sr0
May 8 00:45:07.669254 extend-filesystems[1438]: Found vda
May 8 00:45:07.669254 extend-filesystems[1438]: Found vda1
May 8 00:45:07.669254 extend-filesystems[1438]: Found vda2
May 8 00:45:07.669254 extend-filesystems[1438]: Found vda3
May 8 00:45:07.669254 extend-filesystems[1438]: Found usr
May 8 00:45:07.669254 extend-filesystems[1438]: Found vda4
May 8 00:45:07.669254 extend-filesystems[1438]: Found vda6
May 8 00:45:07.669254 extend-filesystems[1438]: Found vda7
May 8 00:45:07.669254 extend-filesystems[1438]: Found vda9
May 8 00:45:07.669254 extend-filesystems[1438]: Checking size of /dev/vda9
May 8 00:45:07.668352 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:45:07.699223 extend-filesystems[1438]: Resized partition /dev/vda9
May 8 00:45:07.699435 jq[1447]: true
May 8 00:45:07.668680 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:45:07.699680 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
May 8 00:45:07.671921 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:45:07.677817 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:45:07.678044 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:45:07.680122 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:45:07.681594 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:45:07.717441 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 8 00:45:07.717500 update_engine[1445]: I20250508 00:45:07.717033 1445 main.cc:92] Flatcar Update Engine starting
May 8 00:45:07.725953 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1361)
May 8 00:45:07.724162 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:45:07.726233 update_engine[1445]: I20250508 00:45:07.721132 1445 update_check_scheduler.cc:74] Next update check in 8m22s
May 8 00:45:07.726258 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:45:07.726355 jq[1461]: true
May 8 00:45:07.736990 tar[1456]: linux-amd64/LICENSE
May 8 00:45:07.736990 tar[1456]: linux-amd64/helm
May 8 00:45:07.743620 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:45:07.746059 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:45:07.746099 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:45:07.749489 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:45:07.749516 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:45:07.753439 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:45:07.762624 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:45:07.785057 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:45:07.785057 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:45:07.785057 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:45:07.805381 extend-filesystems[1438]: Resized filesystem in /dev/vda9
May 8 00:45:07.785501 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:45:07.785521 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:45:07.786450 systemd-logind[1444]: New seat seat0.
May 8 00:45:07.797273 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:45:07.804385 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:45:07.804648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:45:07.811069 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:45:07.822804 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:45:07.823996 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:45:07.834314 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:45:07.834562 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:45:07.837103 bash[1496]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:45:07.867722 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:45:07.869747 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:45:07.872891 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 8 00:45:07.899259 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:45:07.940972 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:45:07.943741 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:45:07.945078 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:45:08.243650 containerd[1463]: time="2025-05-08T00:45:08.243490954Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 8 00:45:08.272963 containerd[1463]: time="2025-05-08T00:45:08.272864390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:45:08.275639 containerd[1463]: time="2025-05-08T00:45:08.275582118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:45:08.275639 containerd[1463]: time="2025-05-08T00:45:08.275631891Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:45:08.275717 containerd[1463]: time="2025-05-08T00:45:08.275652730Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:45:08.275884 containerd[1463]: time="2025-05-08T00:45:08.275863326Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:45:08.275922 containerd[1463]: time="2025-05-08T00:45:08.275884515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:45:08.275978 containerd[1463]: time="2025-05-08T00:45:08.275958805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:45:08.275978 containerd[1463]: time="2025-05-08T00:45:08.275974885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:45:08.276213 containerd[1463]: time="2025-05-08T00:45:08.276184177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:45:08.276213 containerd[1463]: time="2025-05-08T00:45:08.276203053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:45:08.276261 containerd[1463]: time="2025-05-08T00:45:08.276216478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:45:08.276261 containerd[1463]: time="2025-05-08T00:45:08.276226757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:45:08.276354 containerd[1463]: time="2025-05-08T00:45:08.276331293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:45:08.276758 containerd[1463]: time="2025-05-08T00:45:08.276734039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:45:08.277235 containerd[1463]: time="2025-05-08T00:45:08.276864383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:45:08.277235 containerd[1463]: time="2025-05-08T00:45:08.276883038Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:45:08.277235 containerd[1463]: time="2025-05-08T00:45:08.277009335Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:45:08.277235 containerd[1463]: time="2025-05-08T00:45:08.277069047Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:45:08.284372 containerd[1463]: time="2025-05-08T00:45:08.284334024Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:45:08.284433 containerd[1463]: time="2025-05-08T00:45:08.284383928Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:45:08.284433 containerd[1463]: time="2025-05-08T00:45:08.284402523Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:45:08.284485 containerd[1463]: time="2025-05-08T00:45:08.284435705Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:45:08.284506 containerd[1463]: time="2025-05-08T00:45:08.284452737Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:45:08.284718 containerd[1463]: time="2025-05-08T00:45:08.284684041Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:45:08.284984 containerd[1463]: time="2025-05-08T00:45:08.284954328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:45:08.285104 containerd[1463]: time="2025-05-08T00:45:08.285074904Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:45:08.285104 containerd[1463]: time="2025-05-08T00:45:08.285095873Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:45:08.285154 containerd[1463]: time="2025-05-08T00:45:08.285108617Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:45:08.285154 containerd[1463]: time="2025-05-08T00:45:08.285121882Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:45:08.285154 containerd[1463]: time="2025-05-08T00:45:08.285135277Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:45:08.285154 containerd[1463]: time="2025-05-08T00:45:08.285148342Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:45:08.285228 containerd[1463]: time="2025-05-08T00:45:08.285161376Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:45:08.285228 containerd[1463]: time="2025-05-08T00:45:08.285176675Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:45:08.285228 containerd[1463]: time="2025-05-08T00:45:08.285190320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:45:08.285228 containerd[1463]: time="2025-05-08T00:45:08.285202313Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:45:08.285228 containerd[1463]: time="2025-05-08T00:45:08.285213754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:45:08.285319 containerd[1463]: time="2025-05-08T00:45:08.285234132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285319 containerd[1463]: time="2025-05-08T00:45:08.285249742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285319 containerd[1463]: time="2025-05-08T00:45:08.285261333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285319 containerd[1463]: time="2025-05-08T00:45:08.285273196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285319 containerd[1463]: time="2025-05-08T00:45:08.285285098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285319 containerd[1463]: time="2025-05-08T00:45:08.285297712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285319 containerd[1463]: time="2025-05-08T00:45:08.285309313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285319 containerd[1463]: time="2025-05-08T00:45:08.285322067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285335923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285349599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285362483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285373774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285385466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285399673Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285441031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285453594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285484 containerd[1463]: time="2025-05-08T00:45:08.285463903Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:45:08.285649 containerd[1463]: time="2025-05-08T00:45:08.285526571Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:45:08.285649 containerd[1463]: time="2025-05-08T00:45:08.285543122Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:45:08.285649 containerd[1463]: time="2025-05-08T00:45:08.285554804Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:45:08.285649 containerd[1463]: time="2025-05-08T00:45:08.285566225Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:45:08.285649 containerd[1463]: time="2025-05-08T00:45:08.285587225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:45:08.285649 containerd[1463]: time="2025-05-08T00:45:08.285612522Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:45:08.285649 containerd[1463]: time="2025-05-08T00:45:08.285623202Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:45:08.285649 containerd[1463]: time="2025-05-08T00:45:08.285634503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:45:08.286000 containerd[1463]: time="2025-05-08T00:45:08.285938995Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:45:08.286000 containerd[1463]: time="2025-05-08T00:45:08.285995010Z" level=info msg="Connect containerd service"
May 8 00:45:08.286235 containerd[1463]: time="2025-05-08T00:45:08.286046105Z" level=info msg="using legacy CRI server"
May 8 00:45:08.286235 containerd[1463]: time="2025-05-08T00:45:08.286054251Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:45:08.286235 containerd[1463]: time="2025-05-08T00:45:08.286163706Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:45:08.286883 containerd[1463]: time="2025-05-08T00:45:08.286843641Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:45:08.287302 containerd[1463]: time="2025-05-08T00:45:08.287259281Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:45:08.287357 containerd[1463]: time="2025-05-08T00:45:08.287283577Z" level=info msg="Start subscribing containerd event"
May 8 00:45:08.287357 containerd[1463]: time="2025-05-08T00:45:08.287318031Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:45:08.287396 containerd[1463]: time="2025-05-08T00:45:08.287358447Z" level=info msg="Start recovering state"
May 8 00:45:08.287489 containerd[1463]: time="2025-05-08T00:45:08.287457343Z" level=info msg="Start event monitor"
May 8 00:45:08.287489 containerd[1463]: time="2025-05-08T00:45:08.287482590Z" level=info msg="Start snapshots syncer"
May 8 00:45:08.287547 containerd[1463]: time="2025-05-08T00:45:08.287494893Z" level=info msg="Start cni network conf syncer for default"
May 8 00:45:08.287547 containerd[1463]: time="2025-05-08T00:45:08.287506465Z" level=info msg="Start streaming server"
May 8 00:45:08.288176 containerd[1463]: time="2025-05-08T00:45:08.287605390Z" level=info msg="containerd successfully booted in 0.055295s"
May 8 00:45:08.287889 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:45:08.424975 tar[1456]: linux-amd64/README.md
May 8 00:45:08.444736 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:45:08.941643 systemd-networkd[1400]: eth0: Gained IPv6LL
May 8 00:45:08.945215 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 8 00:45:08.947179 systemd[1]: Reached target network-online.target - Network is Online.
May 8 00:45:08.960675 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 8 00:45:08.963528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:45:08.966209 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 8 00:45:08.987031 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 8 00:45:08.988940 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 8 00:45:08.989148 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 8 00:45:08.992297 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 8 00:45:10.351570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:45:10.353313 systemd[1]: Reached target multi-user.target - Multi-User System.
May 8 00:45:10.356608 systemd[1]: Startup finished in 1.523s (kernel) + 7.401s (initrd) + 5.284s (userspace) = 14.209s.
May 8 00:45:10.358795 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:45:10.779860 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 00:45:10.781486 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:59462.service - OpenSSH per-connection server daemon (10.0.0.1:59462).
May 8 00:45:10.840371 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 59462 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:45:10.842492 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:10.853807 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 8 00:45:10.856067 systemd-logind[1444]: New session 1 of user core.
May 8 00:45:10.869881 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 8 00:45:10.887540 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 8 00:45:10.899839 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 8 00:45:10.902995 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:11.007379 kubelet[1549]: E0508 00:45:11.007296 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:45:11.012031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:45:11.012229 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:45:11.012631 systemd[1]: kubelet.service: Consumed 1.907s CPU time.
May 8 00:45:11.035869 systemd[1565]: Queued start job for default target default.target.
May 8 00:45:11.046663 systemd[1565]: Created slice app.slice - User Application Slice.
May 8 00:45:11.046688 systemd[1565]: Reached target paths.target - Paths.
May 8 00:45:11.046702 systemd[1565]: Reached target timers.target - Timers.
May 8 00:45:11.048158 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 8 00:45:11.060041 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 8 00:45:11.060165 systemd[1565]: Reached target sockets.target - Sockets.
May 8 00:45:11.060185 systemd[1565]: Reached target basic.target - Basic System.
May 8 00:45:11.060222 systemd[1565]: Reached target default.target - Main User Target.
May 8 00:45:11.060255 systemd[1565]: Startup finished in 147ms.
May 8 00:45:11.060606 systemd[1]: Started user@500.service - User Manager for UID 500.
May 8 00:45:11.062094 systemd[1]: Started session-1.scope - Session 1 of User core.
May 8 00:45:11.125862 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:59470.service - OpenSSH per-connection server daemon (10.0.0.1:59470).
May 8 00:45:11.163785 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 59470 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:45:11.165275 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:11.169191 systemd-logind[1444]: New session 2 of user core.
May 8 00:45:11.179552 systemd[1]: Started session-2.scope - Session 2 of User core.
May 8 00:45:11.232674 sshd[1577]: pam_unix(sshd:session): session closed for user core
May 8 00:45:11.250162 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:59470.service: Deactivated successfully.
May 8 00:45:11.251756 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:45:11.253093 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
May 8 00:45:11.263694 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:59486.service - OpenSSH per-connection server daemon (10.0.0.1:59486).
May 8 00:45:11.264606 systemd-logind[1444]: Removed session 2.
May 8 00:45:11.294014 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 59486 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:45:11.295459 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:11.299317 systemd-logind[1444]: New session 3 of user core.
May 8 00:45:11.305550 systemd[1]: Started session-3.scope - Session 3 of User core.
May 8 00:45:11.353903 sshd[1584]: pam_unix(sshd:session): session closed for user core
May 8 00:45:11.363198 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:59486.service: Deactivated successfully.
May 8 00:45:11.364912 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:45:11.366203 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
May 8 00:45:11.367460 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:59490.service - OpenSSH per-connection server daemon (10.0.0.1:59490).
May 8 00:45:11.368256 systemd-logind[1444]: Removed session 3.
May 8 00:45:11.402753 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 59490 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:45:11.404285 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:11.407862 systemd-logind[1444]: New session 4 of user core.
May 8 00:45:11.420550 systemd[1]: Started session-4.scope - Session 4 of User core.
May 8 00:45:11.476813 sshd[1591]: pam_unix(sshd:session): session closed for user core
May 8 00:45:11.486325 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:59490.service: Deactivated successfully.
May 8 00:45:11.488029 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:45:11.489823 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
May 8 00:45:11.499666 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:59496.service - OpenSSH per-connection server daemon (10.0.0.1:59496).
May 8 00:45:11.500760 systemd-logind[1444]: Removed session 4.
May 8 00:45:11.530574 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 59496 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:45:11.532170 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:11.536609 systemd-logind[1444]: New session 5 of user core.
May 8 00:45:11.548572 systemd[1]: Started session-5.scope - Session 5 of User core.
May 8 00:45:11.606874 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 8 00:45:11.607223 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:45:11.627777 sudo[1601]: pam_unix(sudo:session): session closed for user root
May 8 00:45:11.629922 sshd[1598]: pam_unix(sshd:session): session closed for user core
May 8 00:45:11.646032 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:59496.service: Deactivated successfully.
May 8 00:45:11.647578 systemd[1]: session-5.scope: Deactivated successfully.
May 8 00:45:11.649363 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
May 8 00:45:11.659650 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:59500.service - OpenSSH per-connection server daemon (10.0.0.1:59500).
May 8 00:45:11.660504 systemd-logind[1444]: Removed session 5.
May 8 00:45:11.692180 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 59500 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:45:11.693795 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:11.697644 systemd-logind[1444]: New session 6 of user core.
May 8 00:45:11.707527 systemd[1]: Started session-6.scope - Session 6 of User core.
May 8 00:45:11.759890 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 8 00:45:11.760205 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:45:11.763592 sudo[1610]: pam_unix(sudo:session): session closed for user root
May 8 00:45:11.769202 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 8 00:45:11.769542 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:45:11.788682 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 8 00:45:11.790371 auditctl[1613]: No rules
May 8 00:45:11.791842 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 00:45:11.792171 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 8 00:45:11.794284 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:45:11.830389 augenrules[1631]: No rules
May 8 00:45:11.832364 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:45:11.834069 sudo[1609]: pam_unix(sudo:session): session closed for user root
May 8 00:45:11.836500 sshd[1606]: pam_unix(sshd:session): session closed for user core
May 8 00:45:11.856375 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:59500.service: Deactivated successfully.
May 8 00:45:11.858058 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:45:11.859388 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
May 8 00:45:11.860654 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:59506.service - OpenSSH per-connection server daemon (10.0.0.1:59506).
May 8 00:45:11.861449 systemd-logind[1444]: Removed session 6.
May 8 00:45:11.896318 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 59506 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:45:11.898143 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:45:11.902597 systemd-logind[1444]: New session 7 of user core.
May 8 00:45:11.913554 systemd[1]: Started session-7.scope - Session 7 of User core.
May 8 00:45:11.968245 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:45:11.968611 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 8 00:45:12.459629 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 8 00:45:12.459757 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 8 00:45:13.252942 dockerd[1660]: time="2025-05-08T00:45:13.252859718Z" level=info msg="Starting up"
May 8 00:45:14.613970 dockerd[1660]: time="2025-05-08T00:45:14.613688895Z" level=info msg="Loading containers: start."
May 8 00:45:14.829443 kernel: Initializing XFRM netlink socket
May 8 00:45:14.914079 systemd-networkd[1400]: docker0: Link UP
May 8 00:45:14.942367 dockerd[1660]: time="2025-05-08T00:45:14.942302169Z" level=info msg="Loading containers: done."
May 8 00:45:14.968768 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1859066134-merged.mount: Deactivated successfully.
May 8 00:45:14.976877 dockerd[1660]: time="2025-05-08T00:45:14.976815707Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:45:14.976982 dockerd[1660]: time="2025-05-08T00:45:14.976960799Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 8 00:45:14.977126 dockerd[1660]: time="2025-05-08T00:45:14.977098407Z" level=info msg="Daemon has completed initialization"
May 8 00:45:15.061673 dockerd[1660]: time="2025-05-08T00:45:15.061553390Z" level=info msg="API listen on /run/docker.sock"
May 8 00:45:15.061782 systemd[1]: Started docker.service - Docker Application Container Engine.
May 8 00:45:16.315007 containerd[1463]: time="2025-05-08T00:45:16.314960534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 8 00:45:17.548924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051679477.mount: Deactivated successfully.
May 8 00:45:20.514929 containerd[1463]: time="2025-05-08T00:45:20.514829548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:20.528520 containerd[1463]: time="2025-05-08T00:45:20.528377660Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879"
May 8 00:45:20.531637 containerd[1463]: time="2025-05-08T00:45:20.531561552Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:20.537685 containerd[1463]: time="2025-05-08T00:45:20.537611360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:20.539487 containerd[1463]: time="2025-05-08T00:45:20.539391028Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 4.224371884s"
May 8 00:45:20.539622 containerd[1463]: time="2025-05-08T00:45:20.539495735Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 8 00:45:20.540580 containerd[1463]: time="2025-05-08T00:45:20.540547878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 8 00:45:21.262947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:45:21.273810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:45:21.576117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:45:21.580447 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:45:22.556270 kubelet[1872]: E0508 00:45:22.556182 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:45:22.563306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:45:22.563578 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:45:23.066946 containerd[1463]: time="2025-05-08T00:45:23.066892218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:23.069274 containerd[1463]: time="2025-05-08T00:45:23.069195939Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589"
May 8 00:45:23.116608 containerd[1463]: time="2025-05-08T00:45:23.116560952Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:23.153877 containerd[1463]: time="2025-05-08T00:45:23.153835990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:23.154945 containerd[1463]: time="2025-05-08T00:45:23.154900376Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.614314427s"
May 8 00:45:23.154945 containerd[1463]: time="2025-05-08T00:45:23.154946412Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 8 00:45:23.155447 containerd[1463]: time="2025-05-08T00:45:23.155426623Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 8 00:45:24.937701 containerd[1463]: time="2025-05-08T00:45:24.937629747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:24.938715 containerd[1463]: time="2025-05-08T00:45:24.938642496Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938"
May 8 00:45:24.940364 containerd[1463]: time="2025-05-08T00:45:24.940324411Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:24.943325 containerd[1463]: time="2025-05-08T00:45:24.943295544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:24.944393 containerd[1463]: time="2025-05-08T00:45:24.944353388Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.788898011s"
May 8 00:45:24.944461 containerd[1463]: time="2025-05-08T00:45:24.944396379Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 8 00:45:24.944964 containerd[1463]: time="2025-05-08T00:45:24.944938356Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 8 00:45:26.603396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495774969.mount: Deactivated successfully.
May 8 00:45:27.147136 containerd[1463]: time="2025-05-08T00:45:27.147069444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:27.151637 containerd[1463]: time="2025-05-08T00:45:27.151597367Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
May 8 00:45:27.152822 containerd[1463]: time="2025-05-08T00:45:27.152773133Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:27.155872 containerd[1463]: time="2025-05-08T00:45:27.155835076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:27.156477 containerd[1463]: time="2025-05-08T00:45:27.156435713Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.211462742s"
May 8 00:45:27.156477 containerd[1463]: time="2025-05-08T00:45:27.156469215Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 8 00:45:27.156989 containerd[1463]: time="2025-05-08T00:45:27.156952542Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 8 00:45:27.712891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017134693.mount: Deactivated successfully.
May 8 00:45:29.785198 containerd[1463]: time="2025-05-08T00:45:29.785136063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:29.786037 containerd[1463]: time="2025-05-08T00:45:29.785992820Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 8 00:45:29.787451 containerd[1463]: time="2025-05-08T00:45:29.787400120Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:29.790227 containerd[1463]: time="2025-05-08T00:45:29.790181607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:29.791819 containerd[1463]: time="2025-05-08T00:45:29.791768464Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.634775356s"
May 8 00:45:29.791819 containerd[1463]: time="2025-05-08T00:45:29.791805022Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 8 00:45:29.792332 containerd[1463]: time="2025-05-08T00:45:29.792285704Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 8 00:45:30.235033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2951974323.mount: Deactivated successfully.
May 8 00:45:30.240071 containerd[1463]: time="2025-05-08T00:45:30.240027049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:30.240865 containerd[1463]: time="2025-05-08T00:45:30.240807573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 8 00:45:30.242282 containerd[1463]: time="2025-05-08T00:45:30.242251942Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:30.244471 containerd[1463]: time="2025-05-08T00:45:30.244434335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:30.245142 containerd[1463]: time="2025-05-08T00:45:30.245102990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 452.785646ms"
May 8 00:45:30.245196 containerd[1463]: time="2025-05-08T00:45:30.245143976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 8 00:45:30.245646 containerd[1463]: time="2025-05-08T00:45:30.245615661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 8 00:45:30.784660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486127900.mount: Deactivated successfully.
May 8 00:45:32.813737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:45:32.822584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:45:33.137485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:45:33.143397 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 00:45:33.305705 kubelet[2017]: E0508 00:45:33.305642 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:45:33.310204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:45:33.310523 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:45:33.724363 containerd[1463]: time="2025-05-08T00:45:33.724306952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:33.725322 containerd[1463]: time="2025-05-08T00:45:33.725289785Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 8 00:45:33.726693 containerd[1463]: time="2025-05-08T00:45:33.726646580Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:33.729675 containerd[1463]: time="2025-05-08T00:45:33.729641638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:45:33.730886 containerd[1463]: time="2025-05-08T00:45:33.730834225Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.485041452s"
May 8 00:45:33.730926 containerd[1463]: time="2025-05-08T00:45:33.730884449Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 8 00:45:36.350941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:45:36.362608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:45:36.385683 systemd[1]: Reloading requested from client PID 2053 ('systemctl') (unit session-7.scope)...
May 8 00:45:36.385700 systemd[1]: Reloading...
May 8 00:45:36.474309 zram_generator::config[2092]: No configuration found. May 8 00:45:36.939827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:45:37.017988 systemd[1]: Reloading finished in 631 ms. May 8 00:45:37.076671 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:45:37.079669 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:45:37.079935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:45:37.081449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:45:37.242050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:45:37.246911 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:45:37.374482 kubelet[2142]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:45:37.374482 kubelet[2142]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:45:37.374482 kubelet[2142]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:45:37.374910 kubelet[2142]: I0508 00:45:37.374579 2142 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:45:37.610803 kubelet[2142]: I0508 00:45:37.610666 2142 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:45:37.610803 kubelet[2142]: I0508 00:45:37.610700 2142 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:45:37.611030 kubelet[2142]: I0508 00:45:37.611007 2142 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:45:38.101923 kubelet[2142]: E0508 00:45:38.101782 2142 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:38.105075 kubelet[2142]: I0508 00:45:38.105022 2142 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:45:38.114130 kubelet[2142]: E0508 00:45:38.114087 2142 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:45:38.114130 kubelet[2142]: I0508 00:45:38.114123 2142 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:45:38.120313 kubelet[2142]: I0508 00:45:38.120286 2142 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:45:38.120666 kubelet[2142]: I0508 00:45:38.120611 2142 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:45:38.120931 kubelet[2142]: I0508 00:45:38.120655 2142 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:45:38.121056 kubelet[2142]: I0508 00:45:38.120938 2142 topology_manager.go:138] "Creating topology manager with none policy" May 
8 00:45:38.121056 kubelet[2142]: I0508 00:45:38.120950 2142 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:45:38.121161 kubelet[2142]: I0508 00:45:38.121132 2142 state_mem.go:36] "Initialized new in-memory state store" May 8 00:45:38.124270 kubelet[2142]: I0508 00:45:38.124233 2142 kubelet.go:446] "Attempting to sync node with API server" May 8 00:45:38.124270 kubelet[2142]: I0508 00:45:38.124258 2142 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:45:38.124353 kubelet[2142]: I0508 00:45:38.124287 2142 kubelet.go:352] "Adding apiserver pod source" May 8 00:45:38.124353 kubelet[2142]: I0508 00:45:38.124300 2142 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:45:38.127379 kubelet[2142]: I0508 00:45:38.127322 2142 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:45:38.127978 kubelet[2142]: I0508 00:45:38.127798 2142 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:45:38.128667 kubelet[2142]: W0508 00:45:38.128084 2142 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 8 00:45:38.128667 kubelet[2142]: E0508 00:45:38.128154 2142 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:38.128980 kubelet[2142]: W0508 00:45:38.128869 2142 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. 
Recreating. May 8 00:45:38.129322 kubelet[2142]: W0508 00:45:38.129244 2142 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 8 00:45:38.129378 kubelet[2142]: E0508 00:45:38.129340 2142 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:38.131286 kubelet[2142]: I0508 00:45:38.131256 2142 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:45:38.131335 kubelet[2142]: I0508 00:45:38.131303 2142 server.go:1287] "Started kubelet" May 8 00:45:38.131511 kubelet[2142]: I0508 00:45:38.131457 2142 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:45:38.134429 kubelet[2142]: I0508 00:45:38.131915 2142 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:45:38.134429 kubelet[2142]: I0508 00:45:38.131990 2142 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:45:38.134429 kubelet[2142]: I0508 00:45:38.132732 2142 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:45:38.134429 kubelet[2142]: I0508 00:45:38.132981 2142 server.go:490] "Adding debug handlers to kubelet server" May 8 00:45:38.134429 kubelet[2142]: I0508 00:45:38.133106 2142 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:45:38.134429 kubelet[2142]: E0508 00:45:38.134003 2142 kubelet_node_status.go:467] "Error getting the current node 
from lister" err="node \"localhost\" not found" May 8 00:45:38.134429 kubelet[2142]: I0508 00:45:38.134032 2142 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:45:38.134429 kubelet[2142]: I0508 00:45:38.134177 2142 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:45:38.134429 kubelet[2142]: I0508 00:45:38.134220 2142 reconciler.go:26] "Reconciler: start to sync state" May 8 00:45:38.134762 kubelet[2142]: W0508 00:45:38.134735 2142 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 8 00:45:38.134847 kubelet[2142]: E0508 00:45:38.134828 2142 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:38.135857 kubelet[2142]: E0508 00:45:38.135821 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" May 8 00:45:38.136473 kubelet[2142]: E0508 00:45:38.135395 2142 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66b05bdf8ef1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:45:38.131275505 +0000 UTC m=+0.802328700,LastTimestamp:2025-05-08 00:45:38.131275505 +0000 UTC m=+0.802328700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:45:38.137468 kubelet[2142]: E0508 00:45:38.137444 2142 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:45:38.138186 kubelet[2142]: I0508 00:45:38.137749 2142 factory.go:221] Registration of the containerd container factory successfully May 8 00:45:38.138186 kubelet[2142]: I0508 00:45:38.137762 2142 factory.go:221] Registration of the systemd container factory successfully May 8 00:45:38.138186 kubelet[2142]: I0508 00:45:38.137852 2142 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:45:38.152995 kubelet[2142]: I0508 00:45:38.152969 2142 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:45:38.152995 kubelet[2142]: I0508 00:45:38.152985 2142 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:45:38.152995 kubelet[2142]: I0508 00:45:38.153005 2142 state_mem.go:36] "Initialized new in-memory state store" May 8 00:45:38.154885 kubelet[2142]: I0508 00:45:38.154836 2142 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:45:38.156263 kubelet[2142]: I0508 00:45:38.156238 2142 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:45:38.156318 kubelet[2142]: I0508 00:45:38.156271 2142 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:45:38.156318 kubelet[2142]: I0508 00:45:38.156294 2142 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 8 00:45:38.156318 kubelet[2142]: I0508 00:45:38.156305 2142 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:45:38.156375 kubelet[2142]: E0508 00:45:38.156348 2142 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:45:38.156945 kubelet[2142]: W0508 00:45:38.156884 2142 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 8 00:45:38.156945 kubelet[2142]: E0508 00:45:38.156934 2142 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:38.234670 kubelet[2142]: E0508 00:45:38.234608 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:38.243589 kubelet[2142]: I0508 00:45:38.243557 2142 policy_none.go:49] "None policy: Start" May 8 00:45:38.243589 kubelet[2142]: I0508 00:45:38.243586 2142 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:45:38.243659 kubelet[2142]: I0508 00:45:38.243608 2142 state_mem.go:35] "Initializing new in-memory state store" May 8 00:45:38.256954 kubelet[2142]: E0508 00:45:38.256917 2142 kubelet.go:2412] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" May 8 00:45:38.335401 kubelet[2142]: E0508 00:45:38.335354 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:38.336820 kubelet[2142]: E0508 00:45:38.336783 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" May 8 00:45:38.370478 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:45:38.391585 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:45:38.394863 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:45:38.402315 kubelet[2142]: I0508 00:45:38.402282 2142 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:45:38.402699 kubelet[2142]: I0508 00:45:38.402556 2142 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:45:38.402699 kubelet[2142]: I0508 00:45:38.402576 2142 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:45:38.403324 kubelet[2142]: I0508 00:45:38.402907 2142 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:45:38.403793 kubelet[2142]: E0508 00:45:38.403688 2142 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:45:38.403793 kubelet[2142]: E0508 00:45:38.403733 2142 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:45:38.465746 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 8 00:45:38.495799 kubelet[2142]: E0508 00:45:38.495757 2142 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:45:38.497645 systemd[1]: Created slice kubepods-burstable-pod7b4df89ce81a163ddfbd9609c3360146.slice - libcontainer container kubepods-burstable-pod7b4df89ce81a163ddfbd9609c3360146.slice. May 8 00:45:38.504233 kubelet[2142]: I0508 00:45:38.504207 2142 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:45:38.504567 kubelet[2142]: E0508 00:45:38.504535 2142 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" May 8 00:45:38.508522 kubelet[2142]: E0508 00:45:38.508495 2142 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:45:38.511004 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. 
May 8 00:45:38.512527 kubelet[2142]: E0508 00:45:38.512495 2142 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:45:38.535718 kubelet[2142]: I0508 00:45:38.535686 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b4df89ce81a163ddfbd9609c3360146-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b4df89ce81a163ddfbd9609c3360146\") " pod="kube-system/kube-apiserver-localhost" May 8 00:45:38.535782 kubelet[2142]: I0508 00:45:38.535720 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b4df89ce81a163ddfbd9609c3360146-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b4df89ce81a163ddfbd9609c3360146\") " pod="kube-system/kube-apiserver-localhost" May 8 00:45:38.535782 kubelet[2142]: I0508 00:45:38.535748 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:45:38.535782 kubelet[2142]: I0508 00:45:38.535769 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:45:38.535869 kubelet[2142]: I0508 00:45:38.535784 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:45:38.535869 kubelet[2142]: I0508 00:45:38.535800 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b4df89ce81a163ddfbd9609c3360146-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b4df89ce81a163ddfbd9609c3360146\") " pod="kube-system/kube-apiserver-localhost" May 8 00:45:38.535869 kubelet[2142]: I0508 00:45:38.535816 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:45:38.535869 kubelet[2142]: I0508 00:45:38.535846 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:45:38.535966 kubelet[2142]: I0508 00:45:38.535883 2142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:45:38.706424 kubelet[2142]: I0508 00:45:38.706294 2142 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:45:38.707136 kubelet[2142]: E0508 
00:45:38.707082 2142 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" May 8 00:45:38.737734 kubelet[2142]: E0508 00:45:38.737685 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" May 8 00:45:38.797019 kubelet[2142]: E0508 00:45:38.796983 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:38.797716 containerd[1463]: time="2025-05-08T00:45:38.797662066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 8 00:45:38.809820 kubelet[2142]: E0508 00:45:38.809785 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:38.810291 containerd[1463]: time="2025-05-08T00:45:38.810250218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b4df89ce81a163ddfbd9609c3360146,Namespace:kube-system,Attempt:0,}" May 8 00:45:38.813428 kubelet[2142]: E0508 00:45:38.813378 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:38.813746 containerd[1463]: time="2025-05-08T00:45:38.813712722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 8 00:45:39.108468 kubelet[2142]: I0508 
00:45:39.108351 2142 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:45:39.108668 kubelet[2142]: E0508 00:45:39.108637 2142 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" May 8 00:45:39.158667 kubelet[2142]: W0508 00:45:39.158618 2142 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 8 00:45:39.158719 kubelet[2142]: E0508 00:45:39.158676 2142 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:39.262286 kubelet[2142]: W0508 00:45:39.262204 2142 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 8 00:45:39.262286 kubelet[2142]: E0508 00:45:39.262281 2142 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:39.353604 kubelet[2142]: W0508 00:45:39.353524 2142 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 8 00:45:39.353604 kubelet[2142]: E0508 00:45:39.353596 2142 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:39.477326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693478197.mount: Deactivated successfully. May 8 00:45:39.484739 containerd[1463]: time="2025-05-08T00:45:39.484687239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:45:39.486669 containerd[1463]: time="2025-05-08T00:45:39.486635172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:45:39.487962 containerd[1463]: time="2025-05-08T00:45:39.487899624Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:45:39.489092 containerd[1463]: time="2025-05-08T00:45:39.489051344Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:45:39.490585 containerd[1463]: time="2025-05-08T00:45:39.490554053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:45:39.490766 containerd[1463]: time="2025-05-08T00:45:39.490729332Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:45:39.491454 containerd[1463]: time="2025-05-08T00:45:39.491380694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:45:39.492994 containerd[1463]: time="2025-05-08T00:45:39.492955528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:45:39.495846 containerd[1463]: time="2025-05-08T00:45:39.495807738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 698.058197ms" May 8 00:45:39.500304 containerd[1463]: time="2025-05-08T00:45:39.500259257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 689.907269ms" May 8 00:45:39.502291 containerd[1463]: time="2025-05-08T00:45:39.502262134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 688.486433ms" May 8 00:45:39.538790 kubelet[2142]: E0508 00:45:39.538727 2142 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" May 8 00:45:39.641516 containerd[1463]: time="2025-05-08T00:45:39.641222714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:45:39.641516 containerd[1463]: time="2025-05-08T00:45:39.641270925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:45:39.641516 containerd[1463]: time="2025-05-08T00:45:39.641282016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:39.641516 containerd[1463]: time="2025-05-08T00:45:39.641362416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:39.642064 containerd[1463]: time="2025-05-08T00:45:39.641629467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:45:39.642064 containerd[1463]: time="2025-05-08T00:45:39.641675544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:45:39.642064 containerd[1463]: time="2025-05-08T00:45:39.641690281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:39.642064 containerd[1463]: time="2025-05-08T00:45:39.641890627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:39.643931 containerd[1463]: time="2025-05-08T00:45:39.643874368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:45:39.644085 containerd[1463]: time="2025-05-08T00:45:39.643948267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:45:39.644085 containerd[1463]: time="2025-05-08T00:45:39.643975087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:39.644167 containerd[1463]: time="2025-05-08T00:45:39.644087969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:45:39.672623 systemd[1]: Started cri-containerd-4c27fe8033add2c14c97339f9c353b1962955446380e81671546fa882c7a2f5b.scope - libcontainer container 4c27fe8033add2c14c97339f9c353b1962955446380e81671546fa882c7a2f5b. May 8 00:45:39.678109 systemd[1]: Started cri-containerd-86a698859fd6701ed4673ba392637f89ceab0279c4e617c785524ac7421ff478.scope - libcontainer container 86a698859fd6701ed4673ba392637f89ceab0279c4e617c785524ac7421ff478. May 8 00:45:39.680480 systemd[1]: Started cri-containerd-b99e2ef3aedbbe4e6f3b5d6ce9d090ba5360fcfd9a99355074011d1cf0a6dbbd.scope - libcontainer container b99e2ef3aedbbe4e6f3b5d6ce9d090ba5360fcfd9a99355074011d1cf0a6dbbd. 
May 8 00:45:39.693773 kubelet[2142]: W0508 00:45:39.693690 2142 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 8 00:45:39.693773 kubelet[2142]: E0508 00:45:39.693732 2142 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 8 00:45:39.711841 containerd[1463]: time="2025-05-08T00:45:39.711793836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c27fe8033add2c14c97339f9c353b1962955446380e81671546fa882c7a2f5b\"" May 8 00:45:39.715697 kubelet[2142]: E0508 00:45:39.715666 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:39.719057 containerd[1463]: time="2025-05-08T00:45:39.718065099Z" level=info msg="CreateContainer within sandbox \"4c27fe8033add2c14c97339f9c353b1962955446380e81671546fa882c7a2f5b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:45:39.721906 containerd[1463]: time="2025-05-08T00:45:39.721803070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b4df89ce81a163ddfbd9609c3360146,Namespace:kube-system,Attempt:0,} returns sandbox id \"86a698859fd6701ed4673ba392637f89ceab0279c4e617c785524ac7421ff478\"" May 8 00:45:39.722581 kubelet[2142]: E0508 00:45:39.722555 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:39.725110 containerd[1463]: time="2025-05-08T00:45:39.724956405Z" level=info msg="CreateContainer within sandbox \"86a698859fd6701ed4673ba392637f89ceab0279c4e617c785524ac7421ff478\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:45:39.727501 containerd[1463]: time="2025-05-08T00:45:39.727400269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"b99e2ef3aedbbe4e6f3b5d6ce9d090ba5360fcfd9a99355074011d1cf0a6dbbd\"" May 8 00:45:39.728487 kubelet[2142]: E0508 00:45:39.728456 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:39.730273 containerd[1463]: time="2025-05-08T00:45:39.730246397Z" level=info msg="CreateContainer within sandbox \"b99e2ef3aedbbe4e6f3b5d6ce9d090ba5360fcfd9a99355074011d1cf0a6dbbd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:45:39.746392 containerd[1463]: time="2025-05-08T00:45:39.746333222Z" level=info msg="CreateContainer within sandbox \"4c27fe8033add2c14c97339f9c353b1962955446380e81671546fa882c7a2f5b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ff97469e36bf32eb76044d4298dfaec467800197a6cf41309d52d18de88077f3\"" May 8 00:45:39.746904 containerd[1463]: time="2025-05-08T00:45:39.746864899Z" level=info msg="StartContainer for \"ff97469e36bf32eb76044d4298dfaec467800197a6cf41309d52d18de88077f3\"" May 8 00:45:39.748715 containerd[1463]: time="2025-05-08T00:45:39.748648615Z" level=info msg="CreateContainer within sandbox \"86a698859fd6701ed4673ba392637f89ceab0279c4e617c785524ac7421ff478\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"aa886a825cedae18bd8813da5dde69283487165457233946f02fd6e215c9c9bc\"" May 8 00:45:39.748944 containerd[1463]: time="2025-05-08T00:45:39.748920274Z" level=info msg="StartContainer for \"aa886a825cedae18bd8813da5dde69283487165457233946f02fd6e215c9c9bc\"" May 8 00:45:39.757528 containerd[1463]: time="2025-05-08T00:45:39.757490019Z" level=info msg="CreateContainer within sandbox \"b99e2ef3aedbbe4e6f3b5d6ce9d090ba5360fcfd9a99355074011d1cf0a6dbbd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1bbeda3a958d5e1ddc4c1457dbd95dab97dcdb8fc0ec6a9e21ded6fdcd88d17f\"" May 8 00:45:39.758513 containerd[1463]: time="2025-05-08T00:45:39.758460118Z" level=info msg="StartContainer for \"1bbeda3a958d5e1ddc4c1457dbd95dab97dcdb8fc0ec6a9e21ded6fdcd88d17f\"" May 8 00:45:39.775532 systemd[1]: Started cri-containerd-ff97469e36bf32eb76044d4298dfaec467800197a6cf41309d52d18de88077f3.scope - libcontainer container ff97469e36bf32eb76044d4298dfaec467800197a6cf41309d52d18de88077f3. May 8 00:45:39.779471 systemd[1]: Started cri-containerd-aa886a825cedae18bd8813da5dde69283487165457233946f02fd6e215c9c9bc.scope - libcontainer container aa886a825cedae18bd8813da5dde69283487165457233946f02fd6e215c9c9bc. May 8 00:45:39.783585 systemd[1]: Started cri-containerd-1bbeda3a958d5e1ddc4c1457dbd95dab97dcdb8fc0ec6a9e21ded6fdcd88d17f.scope - libcontainer container 1bbeda3a958d5e1ddc4c1457dbd95dab97dcdb8fc0ec6a9e21ded6fdcd88d17f. 
May 8 00:45:39.830616 containerd[1463]: time="2025-05-08T00:45:39.830568713Z" level=info msg="StartContainer for \"1bbeda3a958d5e1ddc4c1457dbd95dab97dcdb8fc0ec6a9e21ded6fdcd88d17f\" returns successfully" May 8 00:45:39.830992 containerd[1463]: time="2025-05-08T00:45:39.830666958Z" level=info msg="StartContainer for \"aa886a825cedae18bd8813da5dde69283487165457233946f02fd6e215c9c9bc\" returns successfully" May 8 00:45:39.830992 containerd[1463]: time="2025-05-08T00:45:39.830756926Z" level=info msg="StartContainer for \"ff97469e36bf32eb76044d4298dfaec467800197a6cf41309d52d18de88077f3\" returns successfully" May 8 00:45:39.912780 kubelet[2142]: I0508 00:45:39.912737 2142 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:45:40.168224 kubelet[2142]: E0508 00:45:40.167914 2142 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:45:40.168224 kubelet[2142]: E0508 00:45:40.168048 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:40.172779 kubelet[2142]: E0508 00:45:40.172168 2142 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:45:40.172779 kubelet[2142]: E0508 00:45:40.172267 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:40.175982 kubelet[2142]: E0508 00:45:40.175968 2142 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:45:40.176967 kubelet[2142]: E0508 00:45:40.176954 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:40.663182 kubelet[2142]: I0508 00:45:40.662450 2142 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:45:40.663182 kubelet[2142]: E0508 00:45:40.662502 2142 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:45:40.669985 kubelet[2142]: E0508 00:45:40.669791 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:40.770255 kubelet[2142]: E0508 00:45:40.770195 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:40.871138 kubelet[2142]: E0508 00:45:40.871084 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:40.972090 kubelet[2142]: E0508 00:45:40.971978 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:41.072743 kubelet[2142]: E0508 00:45:41.072700 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:41.173197 kubelet[2142]: E0508 00:45:41.173162 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:41.177468 kubelet[2142]: E0508 00:45:41.177440 2142 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:45:41.177588 kubelet[2142]: E0508 00:45:41.177555 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:41.177588 kubelet[2142]: E0508 00:45:41.177569 2142 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:45:41.177667 kubelet[2142]: E0508 00:45:41.177649 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:41.274301 kubelet[2142]: E0508 00:45:41.274202 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:41.375044 kubelet[2142]: E0508 00:45:41.375013 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:41.475835 kubelet[2142]: E0508 00:45:41.475810 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:41.576181 kubelet[2142]: E0508 00:45:41.576051 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:41.677139 kubelet[2142]: E0508 00:45:41.677086 2142 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:45:41.736236 kubelet[2142]: I0508 00:45:41.736189 2142 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:45:41.743598 kubelet[2142]: I0508 00:45:41.743562 2142 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:45:41.748649 kubelet[2142]: I0508 00:45:41.748631 2142 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:45:42.129981 kubelet[2142]: I0508 00:45:42.129951 2142 apiserver.go:52] "Watching apiserver" May 8 00:45:42.131683 kubelet[2142]: E0508 00:45:42.131656 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:42.134939 kubelet[2142]: I0508 00:45:42.134908 2142 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:45:42.178285 kubelet[2142]: I0508 00:45:42.178244 2142 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:45:42.178285 kubelet[2142]: E0508 00:45:42.178251 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:42.182765 kubelet[2142]: E0508 00:45:42.182739 2142 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:45:42.182871 kubelet[2142]: E0508 00:45:42.182843 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:42.692632 systemd[1]: Reloading requested from client PID 2420 ('systemctl') (unit session-7.scope)... May 8 00:45:42.692652 systemd[1]: Reloading... May 8 00:45:42.765437 zram_generator::config[2465]: No configuration found. May 8 00:45:42.865651 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:45:42.956048 systemd[1]: Reloading finished in 263 ms. May 8 00:45:43.002096 kubelet[2142]: I0508 00:45:43.002038 2142 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:45:43.002156 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:45:43.022527 systemd[1]: kubelet.service: Deactivated successfully. 
May 8 00:45:43.022789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:45:43.031725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:45:43.185894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:45:43.190647 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:45:43.227380 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:45:43.227380 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:45:43.227380 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:45:43.227380 kubelet[2504]: I0508 00:45:43.227332 2504 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:45:43.235119 kubelet[2504]: I0508 00:45:43.235066 2504 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:45:43.235119 kubelet[2504]: I0508 00:45:43.235101 2504 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:45:43.236436 kubelet[2504]: I0508 00:45:43.235685 2504 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:45:43.237514 kubelet[2504]: I0508 00:45:43.237488 2504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 8 00:45:43.239622 kubelet[2504]: I0508 00:45:43.239600 2504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:45:43.242272 kubelet[2504]: E0508 00:45:43.242250 2504 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:45:43.243006 kubelet[2504]: I0508 00:45:43.242331 2504 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:45:43.247108 kubelet[2504]: I0508 00:45:43.247079 2504 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:45:43.247383 kubelet[2504]: I0508 00:45:43.247343 2504 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:45:43.247549 kubelet[2504]: I0508 00:45:43.247375 2504 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:45:43.247549 kubelet[2504]: I0508 00:45:43.247547 2504 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:45:43.247668 kubelet[2504]: I0508 00:45:43.247556 2504 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:45:43.247668 kubelet[2504]: I0508 00:45:43.247598 2504 state_mem.go:36] "Initialized new in-memory state store" May 8 00:45:43.247760 kubelet[2504]: I0508 00:45:43.247743 2504 kubelet.go:446] "Attempting to 
sync node with API server" May 8 00:45:43.247760 kubelet[2504]: I0508 00:45:43.247757 2504 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:45:43.247808 kubelet[2504]: I0508 00:45:43.247772 2504 kubelet.go:352] "Adding apiserver pod source" May 8 00:45:43.247808 kubelet[2504]: I0508 00:45:43.247783 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:45:43.251606 kubelet[2504]: I0508 00:45:43.251588 2504 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:45:43.251971 kubelet[2504]: I0508 00:45:43.251955 2504 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:45:43.252382 kubelet[2504]: I0508 00:45:43.252364 2504 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:45:43.252431 kubelet[2504]: I0508 00:45:43.252396 2504 server.go:1287] "Started kubelet" May 8 00:45:43.255978 kubelet[2504]: I0508 00:45:43.255932 2504 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:45:43.256231 kubelet[2504]: I0508 00:45:43.256214 2504 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:45:43.256881 kubelet[2504]: I0508 00:45:43.256855 2504 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:45:43.257671 kubelet[2504]: I0508 00:45:43.257646 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:45:43.259484 kubelet[2504]: I0508 00:45:43.258616 2504 server.go:490] "Adding debug handlers to kubelet server" May 8 00:45:43.259484 kubelet[2504]: I0508 00:45:43.259349 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:45:43.259543 kubelet[2504]: I0508 00:45:43.259503 2504 
volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:45:43.259605 kubelet[2504]: I0508 00:45:43.259582 2504 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:45:43.259742 kubelet[2504]: I0508 00:45:43.259720 2504 reconciler.go:26] "Reconciler: start to sync state" May 8 00:45:43.269436 kubelet[2504]: I0508 00:45:43.266927 2504 factory.go:221] Registration of the systemd container factory successfully May 8 00:45:43.269436 kubelet[2504]: I0508 00:45:43.267036 2504 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:45:43.271709 kubelet[2504]: E0508 00:45:43.271678 2504 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:45:43.277434 kubelet[2504]: I0508 00:45:43.274923 2504 factory.go:221] Registration of the containerd container factory successfully May 8 00:45:43.296267 kubelet[2504]: I0508 00:45:43.296174 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:45:43.297998 kubelet[2504]: I0508 00:45:43.297955 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:45:43.298097 kubelet[2504]: I0508 00:45:43.298010 2504 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:45:43.298097 kubelet[2504]: I0508 00:45:43.298038 2504 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 8 00:45:43.298097 kubelet[2504]: I0508 00:45:43.298046 2504 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:45:43.298163 kubelet[2504]: E0508 00:45:43.298113 2504 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:45:43.318738 kubelet[2504]: I0508 00:45:43.318702 2504 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:45:43.318738 kubelet[2504]: I0508 00:45:43.318723 2504 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:45:43.318738 kubelet[2504]: I0508 00:45:43.318741 2504 state_mem.go:36] "Initialized new in-memory state store" May 8 00:45:43.318949 kubelet[2504]: I0508 00:45:43.318877 2504 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:45:43.318949 kubelet[2504]: I0508 00:45:43.318888 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:45:43.318949 kubelet[2504]: I0508 00:45:43.318905 2504 policy_none.go:49] "None policy: Start" May 8 00:45:43.318949 kubelet[2504]: I0508 00:45:43.318915 2504 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:45:43.318949 kubelet[2504]: I0508 00:45:43.318924 2504 state_mem.go:35] "Initializing new in-memory state store" May 8 00:45:43.319105 kubelet[2504]: I0508 00:45:43.319011 2504 state_mem.go:75] "Updated machine memory state" May 8 00:45:43.322400 kubelet[2504]: I0508 00:45:43.322375 2504 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:45:43.322867 kubelet[2504]: I0508 00:45:43.322576 2504 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:45:43.322867 kubelet[2504]: I0508 00:45:43.322592 2504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:45:43.322867 kubelet[2504]: I0508 00:45:43.322788 2504 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" May 8 00:45:43.324829 kubelet[2504]: E0508 00:45:43.323974 2504 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:45:43.398897 kubelet[2504]: I0508 00:45:43.398839 2504 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:45:43.398897 kubelet[2504]: I0508 00:45:43.398879 2504 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:45:43.399097 kubelet[2504]: I0508 00:45:43.399066 2504 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:45:43.404642 kubelet[2504]: E0508 00:45:43.404599 2504 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:45:43.404988 kubelet[2504]: E0508 00:45:43.404969 2504 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:45:43.405025 kubelet[2504]: E0508 00:45:43.404994 2504 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:45:43.429308 kubelet[2504]: I0508 00:45:43.429268 2504 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:45:43.435946 kubelet[2504]: I0508 00:45:43.435918 2504 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 8 00:45:43.436103 kubelet[2504]: I0508 00:45:43.435999 2504 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:45:43.460981 kubelet[2504]: I0508 00:45:43.460915 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7b4df89ce81a163ddfbd9609c3360146-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b4df89ce81a163ddfbd9609c3360146\") " pod="kube-system/kube-apiserver-localhost" May 8 00:45:43.460981 kubelet[2504]: I0508 00:45:43.460969 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:45:43.461143 kubelet[2504]: I0508 00:45:43.460995 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:45:43.461143 kubelet[2504]: I0508 00:45:43.461013 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:45:43.461143 kubelet[2504]: I0508 00:45:43.461034 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:45:43.461143 kubelet[2504]: I0508 00:45:43.461049 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7b4df89ce81a163ddfbd9609c3360146-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b4df89ce81a163ddfbd9609c3360146\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:45:43.461143 kubelet[2504]: I0508 00:45:43.461065 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b4df89ce81a163ddfbd9609c3360146-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b4df89ce81a163ddfbd9609c3360146\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:45:43.461262 kubelet[2504]: I0508 00:45:43.461082 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:45:43.461262 kubelet[2504]: I0508 00:45:43.461095 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:45:43.688975 sudo[2543]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 8 00:45:43.689334 sudo[2543]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 8 00:45:43.705192 kubelet[2504]: E0508 00:45:43.705154 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:43.705356 kubelet[2504]: E0508 00:45:43.705318 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:43.705572 kubelet[2504]: E0508 00:45:43.705440 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:44.148633 sudo[2543]: pam_unix(sudo:session): session closed for user root
May 8 00:45:44.248871 kubelet[2504]: I0508 00:45:44.248824 2504 apiserver.go:52] "Watching apiserver"
May 8 00:45:44.259765 kubelet[2504]: I0508 00:45:44.259724 2504 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 8 00:45:44.309913 kubelet[2504]: E0508 00:45:44.309761 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:44.309913 kubelet[2504]: I0508 00:45:44.309764 2504 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 8 00:45:44.310920 kubelet[2504]: I0508 00:45:44.310902 2504 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 8 00:45:44.317902 kubelet[2504]: E0508 00:45:44.317857 2504 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 8 00:45:44.318033 kubelet[2504]: E0508 00:45:44.318024 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:44.319041 kubelet[2504]: E0508 00:45:44.318515 2504 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 8 00:45:44.319041 kubelet[2504]: E0508 00:45:44.318613 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:44.336604 kubelet[2504]: I0508 00:45:44.336530 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.336504982 podStartE2EDuration="3.336504982s" podCreationTimestamp="2025-05-08 00:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:44.330057888 +0000 UTC m=+1.135413937" watchObservedRunningTime="2025-05-08 00:45:44.336504982 +0000 UTC m=+1.141861032"
May 8 00:45:44.344299 kubelet[2504]: I0508 00:45:44.344256 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.344239834 podStartE2EDuration="3.344239834s" podCreationTimestamp="2025-05-08 00:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:44.337199979 +0000 UTC m=+1.142556028" watchObservedRunningTime="2025-05-08 00:45:44.344239834 +0000 UTC m=+1.149595883"
May 8 00:45:44.353044 kubelet[2504]: I0508 00:45:44.352990 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.3529741140000002 podStartE2EDuration="3.352974114s" podCreationTimestamp="2025-05-08 00:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:44.344335467 +0000 UTC m=+1.149691516" watchObservedRunningTime="2025-05-08 00:45:44.352974114 +0000 UTC m=+1.158330163"
May 8 00:45:45.311375 kubelet[2504]: E0508 00:45:45.311336 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:45.311912 kubelet[2504]: E0508 00:45:45.311534 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:45.820332 sudo[1642]: pam_unix(sudo:session): session closed for user root
May 8 00:45:45.822524 sshd[1639]: pam_unix(sshd:session): session closed for user core
May 8 00:45:45.827278 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:59506.service: Deactivated successfully.
May 8 00:45:45.829646 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:45:45.829914 systemd[1]: session-7.scope: Consumed 5.041s CPU time, 160.7M memory peak, 0B memory swap peak.
May 8 00:45:45.830465 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
May 8 00:45:45.831609 systemd-logind[1444]: Removed session 7.
May 8 00:45:46.385230 kubelet[2504]: E0508 00:45:46.385186 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:47.106115 kubelet[2504]: E0508 00:45:47.106064 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:48.806432 kubelet[2504]: E0508 00:45:48.806351 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:49.189987 kubelet[2504]: I0508 00:45:49.189922 2504 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 8 00:45:49.190337 containerd[1463]: time="2025-05-08T00:45:49.190285667Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 8 00:45:49.190919 kubelet[2504]: I0508 00:45:49.190536 2504 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 8 00:45:49.216953 systemd[1]: Created slice kubepods-besteffort-poda00eef73_77c3_406f_ab70_63880c1230f2.slice - libcontainer container kubepods-besteffort-poda00eef73_77c3_406f_ab70_63880c1230f2.slice.
May 8 00:45:49.236387 systemd[1]: Created slice kubepods-burstable-podc0ce0610_2ac9_45d0_9e0f_b4d60f4f3b8f.slice - libcontainer container kubepods-burstable-podc0ce0610_2ac9_45d0_9e0f_b4d60f4f3b8f.slice.
May 8 00:45:49.300792 kubelet[2504]: I0508 00:45:49.300298 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-lib-modules\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.300792 kubelet[2504]: I0508 00:45:49.300346 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-xtables-lock\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.300792 kubelet[2504]: I0508 00:45:49.300373 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-config-path\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.300792 kubelet[2504]: I0508 00:45:49.300395 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a00eef73-77c3-406f-ab70-63880c1230f2-xtables-lock\") pod \"kube-proxy-v45s5\" (UID: \"a00eef73-77c3-406f-ab70-63880c1230f2\") " pod="kube-system/kube-proxy-v45s5"
May 8 00:45:49.300792 kubelet[2504]: I0508 00:45:49.300440 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-run\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.300792 kubelet[2504]: I0508 00:45:49.300463 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-hubble-tls\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301096 kubelet[2504]: I0508 00:45:49.300485 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thsxh\" (UniqueName: \"kubernetes.io/projected/a00eef73-77c3-406f-ab70-63880c1230f2-kube-api-access-thsxh\") pod \"kube-proxy-v45s5\" (UID: \"a00eef73-77c3-406f-ab70-63880c1230f2\") " pod="kube-system/kube-proxy-v45s5"
May 8 00:45:49.301096 kubelet[2504]: I0508 00:45:49.300511 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-host-proc-sys-net\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301096 kubelet[2504]: I0508 00:45:49.300531 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cni-path\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301096 kubelet[2504]: I0508 00:45:49.300554 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-clustermesh-secrets\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301096 kubelet[2504]: I0508 00:45:49.300575 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-host-proc-sys-kernel\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301210 kubelet[2504]: I0508 00:45:49.300598 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a00eef73-77c3-406f-ab70-63880c1230f2-kube-proxy\") pod \"kube-proxy-v45s5\" (UID: \"a00eef73-77c3-406f-ab70-63880c1230f2\") " pod="kube-system/kube-proxy-v45s5"
May 8 00:45:49.301210 kubelet[2504]: I0508 00:45:49.300624 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-cgroup\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301210 kubelet[2504]: I0508 00:45:49.300648 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-hostproc\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301210 kubelet[2504]: I0508 00:45:49.300723 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-bpf-maps\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301210 kubelet[2504]: I0508 00:45:49.300783 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjkhf\" (UniqueName: \"kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-kube-api-access-xjkhf\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301210 kubelet[2504]: I0508 00:45:49.300811 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-etc-cni-netd\") pod \"cilium-8gzv8\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") " pod="kube-system/cilium-8gzv8"
May 8 00:45:49.301342 kubelet[2504]: I0508 00:45:49.300833 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a00eef73-77c3-406f-ab70-63880c1230f2-lib-modules\") pod \"kube-proxy-v45s5\" (UID: \"a00eef73-77c3-406f-ab70-63880c1230f2\") " pod="kube-system/kube-proxy-v45s5"
May 8 00:45:49.408546 kubelet[2504]: E0508 00:45:49.408497 2504 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 8 00:45:49.408546 kubelet[2504]: E0508 00:45:49.408533 2504 projected.go:194] Error preparing data for projected volume kube-api-access-thsxh for pod kube-system/kube-proxy-v45s5: configmap "kube-root-ca.crt" not found
May 8 00:45:49.408867 kubelet[2504]: E0508 00:45:49.408590 2504 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 8 00:45:49.408867 kubelet[2504]: E0508 00:45:49.408614 2504 projected.go:194] Error preparing data for projected volume kube-api-access-xjkhf for pod kube-system/cilium-8gzv8: configmap "kube-root-ca.crt" not found
May 8 00:45:49.408867 kubelet[2504]: E0508 00:45:49.408598 2504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a00eef73-77c3-406f-ab70-63880c1230f2-kube-api-access-thsxh podName:a00eef73-77c3-406f-ab70-63880c1230f2 nodeName:}" failed. No retries permitted until 2025-05-08 00:45:49.908576187 +0000 UTC m=+6.713932236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-thsxh" (UniqueName: "kubernetes.io/projected/a00eef73-77c3-406f-ab70-63880c1230f2-kube-api-access-thsxh") pod "kube-proxy-v45s5" (UID: "a00eef73-77c3-406f-ab70-63880c1230f2") : configmap "kube-root-ca.crt" not found
May 8 00:45:49.408867 kubelet[2504]: E0508 00:45:49.408667 2504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-kube-api-access-xjkhf podName:c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f nodeName:}" failed. No retries permitted until 2025-05-08 00:45:49.908648294 +0000 UTC m=+6.714004343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xjkhf" (UniqueName: "kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-kube-api-access-xjkhf") pod "cilium-8gzv8" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f") : configmap "kube-root-ca.crt" not found
May 8 00:45:50.270251 systemd[1]: Created slice kubepods-besteffort-podd1a57d54_ed0c_4214_a2db_7de2b0ea2ecf.slice - libcontainer container kubepods-besteffort-podd1a57d54_ed0c_4214_a2db_7de2b0ea2ecf.slice.
May 8 00:45:50.308809 kubelet[2504]: I0508 00:45:50.308741 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7txp\" (UniqueName: \"kubernetes.io/projected/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf-kube-api-access-t7txp\") pod \"cilium-operator-6c4d7847fc-xqzs8\" (UID: \"d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf\") " pod="kube-system/cilium-operator-6c4d7847fc-xqzs8"
May 8 00:45:50.308809 kubelet[2504]: I0508 00:45:50.308805 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xqzs8\" (UID: \"d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf\") " pod="kube-system/cilium-operator-6c4d7847fc-xqzs8"
May 8 00:45:50.434420 kubelet[2504]: E0508 00:45:50.434364 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:50.435110 containerd[1463]: time="2025-05-08T00:45:50.435024876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v45s5,Uid:a00eef73-77c3-406f-ab70-63880c1230f2,Namespace:kube-system,Attempt:0,}"
May 8 00:45:50.442878 kubelet[2504]: E0508 00:45:50.442818 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:50.443596 containerd[1463]: time="2025-05-08T00:45:50.443557872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gzv8,Uid:c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f,Namespace:kube-system,Attempt:0,}"
May 8 00:45:50.466198 containerd[1463]: time="2025-05-08T00:45:50.465559171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:50.466198 containerd[1463]: time="2025-05-08T00:45:50.466148218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:50.466198 containerd[1463]: time="2025-05-08T00:45:50.466162054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:50.466507 containerd[1463]: time="2025-05-08T00:45:50.466377984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:50.486684 containerd[1463]: time="2025-05-08T00:45:50.485731908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:50.487336 containerd[1463]: time="2025-05-08T00:45:50.486657625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:50.487336 containerd[1463]: time="2025-05-08T00:45:50.486682803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:50.487336 containerd[1463]: time="2025-05-08T00:45:50.486828520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:50.493736 systemd[1]: Started cri-containerd-d7498f00c1e822b8e690e06cb75d29d65657ee7f90c5246abf7f47fb8a81c2a0.scope - libcontainer container d7498f00c1e822b8e690e06cb75d29d65657ee7f90c5246abf7f47fb8a81c2a0.
May 8 00:45:50.503631 systemd[1]: Started cri-containerd-7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200.scope - libcontainer container 7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200.
May 8 00:45:50.531392 containerd[1463]: time="2025-05-08T00:45:50.529794479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v45s5,Uid:a00eef73-77c3-406f-ab70-63880c1230f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7498f00c1e822b8e690e06cb75d29d65657ee7f90c5246abf7f47fb8a81c2a0\""
May 8 00:45:50.531940 kubelet[2504]: E0508 00:45:50.531895 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:50.535946 containerd[1463]: time="2025-05-08T00:45:50.535859420Z" level=info msg="CreateContainer within sandbox \"d7498f00c1e822b8e690e06cb75d29d65657ee7f90c5246abf7f47fb8a81c2a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:45:50.539742 containerd[1463]: time="2025-05-08T00:45:50.539681805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gzv8,Uid:c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\""
May 8 00:45:50.541015 kubelet[2504]: E0508 00:45:50.540974 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:50.543025 containerd[1463]: time="2025-05-08T00:45:50.542965477Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 8 00:45:50.563096 containerd[1463]: time="2025-05-08T00:45:50.562980145Z" level=info msg="CreateContainer within sandbox \"d7498f00c1e822b8e690e06cb75d29d65657ee7f90c5246abf7f47fb8a81c2a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4e7ca434534e01e21faa9e19cdc7c418da8ca0be7e1ffbba6ba3bdf51f6c6abf\""
May 8 00:45:50.564045 containerd[1463]: time="2025-05-08T00:45:50.563972619Z" level=info msg="StartContainer for \"4e7ca434534e01e21faa9e19cdc7c418da8ca0be7e1ffbba6ba3bdf51f6c6abf\""
May 8 00:45:50.574253 kubelet[2504]: E0508 00:45:50.574196 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:50.575461 containerd[1463]: time="2025-05-08T00:45:50.574965454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xqzs8,Uid:d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf,Namespace:kube-system,Attempt:0,}"
May 8 00:45:50.604688 systemd[1]: Started cri-containerd-4e7ca434534e01e21faa9e19cdc7c418da8ca0be7e1ffbba6ba3bdf51f6c6abf.scope - libcontainer container 4e7ca434534e01e21faa9e19cdc7c418da8ca0be7e1ffbba6ba3bdf51f6c6abf.
May 8 00:45:50.612246 containerd[1463]: time="2025-05-08T00:45:50.612077292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:50.612461 containerd[1463]: time="2025-05-08T00:45:50.612255089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:50.612461 containerd[1463]: time="2025-05-08T00:45:50.612318570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:50.612655 containerd[1463]: time="2025-05-08T00:45:50.612584384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:50.636834 systemd[1]: Started cri-containerd-761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0.scope - libcontainer container 761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0.
May 8 00:45:50.644142 containerd[1463]: time="2025-05-08T00:45:50.644066286Z" level=info msg="StartContainer for \"4e7ca434534e01e21faa9e19cdc7c418da8ca0be7e1ffbba6ba3bdf51f6c6abf\" returns successfully"
May 8 00:45:50.682478 containerd[1463]: time="2025-05-08T00:45:50.682429029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xqzs8,Uid:d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0\""
May 8 00:45:50.684356 kubelet[2504]: E0508 00:45:50.684322 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:51.325393 kubelet[2504]: E0508 00:45:51.325357 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:51.335010 kubelet[2504]: I0508 00:45:51.334863 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v45s5" podStartSLOduration=2.334839159 podStartE2EDuration="2.334839159s" podCreationTimestamp="2025-05-08 00:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:51.334671401 +0000 UTC m=+8.140027460" watchObservedRunningTime="2025-05-08 00:45:51.334839159 +0000 UTC m=+8.140195208"
May 8 00:45:51.424888 systemd[1]: run-containerd-runc-k8s.io-d7498f00c1e822b8e690e06cb75d29d65657ee7f90c5246abf7f47fb8a81c2a0-runc.k5mql2.mount: Deactivated successfully.
May 8 00:45:53.150709 update_engine[1445]: I20250508 00:45:53.150564 1445 update_attempter.cc:509] Updating boot flags...
May 8 00:45:53.236436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2878)
May 8 00:45:53.298434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2881)
May 8 00:45:53.339444 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2881)
May 8 00:45:56.391469 kubelet[2504]: E0508 00:45:56.391401 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:57.184887 kubelet[2504]: E0508 00:45:57.184827 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:58.767733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911350412.mount: Deactivated successfully.
May 8 00:45:58.811293 kubelet[2504]: E0508 00:45:58.811171 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:59.340565 kubelet[2504]: E0508 00:45:59.340530 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:02.620526 containerd[1463]: time="2025-05-08T00:46:02.620455223Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:46:02.621263 containerd[1463]: time="2025-05-08T00:46:02.621191632Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 8 00:46:02.622347 containerd[1463]: time="2025-05-08T00:46:02.622318787Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:46:02.623768 containerd[1463]: time="2025-05-08T00:46:02.623732644Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.08070572s"
May 8 00:46:02.623768 containerd[1463]: time="2025-05-08T00:46:02.623765977Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 8 00:46:02.632598 containerd[1463]: time="2025-05-08T00:46:02.632557428Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 8 00:46:02.641903 containerd[1463]: time="2025-05-08T00:46:02.641848269Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:46:02.660051 containerd[1463]: time="2025-05-08T00:46:02.657677333Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\""
May 8 00:46:02.662883 containerd[1463]: time="2025-05-08T00:46:02.662827216Z" level=info msg="StartContainer for \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\""
May 8 00:46:02.706536 systemd[1]: Started cri-containerd-05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2.scope - libcontainer container 05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2.
May 8 00:46:02.737279 containerd[1463]: time="2025-05-08T00:46:02.737235950Z" level=info msg="StartContainer for \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\" returns successfully"
May 8 00:46:02.750021 systemd[1]: cri-containerd-05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2.scope: Deactivated successfully.
May 8 00:46:03.412493 kubelet[2504]: E0508 00:46:03.412461 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:03.494744 containerd[1463]: time="2025-05-08T00:46:03.494639105Z" level=info msg="shim disconnected" id=05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2 namespace=k8s.io
May 8 00:46:03.494744 containerd[1463]: time="2025-05-08T00:46:03.494738934Z" level=warning msg="cleaning up after shim disconnected" id=05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2 namespace=k8s.io
May 8 00:46:03.494744 containerd[1463]: time="2025-05-08T00:46:03.494755324Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:46:03.653563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2-rootfs.mount: Deactivated successfully.
May 8 00:46:04.415136 kubelet[2504]: E0508 00:46:04.415100 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:04.417193 containerd[1463]: time="2025-05-08T00:46:04.417138314Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:46:04.435531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446445058.mount: Deactivated successfully.
May 8 00:46:04.437733 containerd[1463]: time="2025-05-08T00:46:04.437682765Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\""
May 8 00:46:04.438264 containerd[1463]: time="2025-05-08T00:46:04.438222002Z" level=info msg="StartContainer for \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\""
May 8 00:46:04.467555 systemd[1]: Started cri-containerd-80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06.scope - libcontainer container 80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06.
May 8 00:46:04.495205 containerd[1463]: time="2025-05-08T00:46:04.495152631Z" level=info msg="StartContainer for \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\" returns successfully"
May 8 00:46:04.506140 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:46:04.506399 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:46:04.506491 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 8 00:46:04.514008 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:46:04.514250 systemd[1]: cri-containerd-80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06.scope: Deactivated successfully.
May 8 00:46:04.536316 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:46:04.548559 containerd[1463]: time="2025-05-08T00:46:04.548495188Z" level=info msg="shim disconnected" id=80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06 namespace=k8s.io
May 8 00:46:04.548559 containerd[1463]: time="2025-05-08T00:46:04.548553769Z" level=warning msg="cleaning up after shim disconnected" id=80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06 namespace=k8s.io
May 8 00:46:04.548559 containerd[1463]: time="2025-05-08T00:46:04.548563397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:46:04.653448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06-rootfs.mount: Deactivated successfully.
May 8 00:46:05.206513 containerd[1463]: time="2025-05-08T00:46:05.206450539Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:46:05.207201 containerd[1463]: time="2025-05-08T00:46:05.207112666Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 8 00:46:05.208318 containerd[1463]: time="2025-05-08T00:46:05.208280927Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:46:05.209482 containerd[1463]: time="2025-05-08T00:46:05.209457524Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.57685948s"
May 8 00:46:05.209534 containerd[1463]: time="2025-05-08T00:46:05.209485156Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 8 00:46:05.211635 containerd[1463]: time="2025-05-08T00:46:05.211520622Z" level=info msg="CreateContainer within sandbox \"761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 8 00:46:05.223209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191497142.mount: Deactivated successfully.
May 8 00:46:05.225451 containerd[1463]: time="2025-05-08T00:46:05.225385966Z" level=info msg="CreateContainer within sandbox \"761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\""
May 8 00:46:05.225832 containerd[1463]: time="2025-05-08T00:46:05.225803744Z" level=info msg="StartContainer for \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\""
May 8 00:46:05.257545 systemd[1]: Started cri-containerd-7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9.scope - libcontainer container 7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9.
May 8 00:46:05.284136 containerd[1463]: time="2025-05-08T00:46:05.284067192Z" level=info msg="StartContainer for \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\" returns successfully"
May 8 00:46:05.419975 kubelet[2504]: E0508 00:46:05.419933 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:05.422247 kubelet[2504]: E0508 00:46:05.422226 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:05.423935 containerd[1463]: time="2025-05-08T00:46:05.423898039Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:46:05.429442 kubelet[2504]: I0508 00:46:05.428212 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xqzs8" podStartSLOduration=0.903241598 podStartE2EDuration="15.428178764s" podCreationTimestamp="2025-05-08 00:45:50 +0000 UTC" firstStartedPulling="2025-05-08 00:45:50.685284028 +0000 UTC m=+7.490640077" lastFinishedPulling="2025-05-08 00:46:05.210221194 +0000 UTC m=+22.015577243" observedRunningTime="2025-05-08 00:46:05.427750637 +0000 UTC m=+22.233106686" watchObservedRunningTime="2025-05-08 00:46:05.428178764 +0000 UTC m=+22.233534823"
May 8 00:46:05.558146 containerd[1463]: time="2025-05-08T00:46:05.558016171Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\""
May 8 00:46:05.558846 containerd[1463]: time="2025-05-08T00:46:05.558790961Z" level=info
msg="StartContainer for \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\"" May 8 00:46:05.588552 systemd[1]: Started cri-containerd-1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed.scope - libcontainer container 1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed. May 8 00:46:05.629513 systemd[1]: cri-containerd-1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed.scope: Deactivated successfully. May 8 00:46:05.655187 systemd[1]: run-containerd-runc-k8s.io-7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9-runc.iuZBpH.mount: Deactivated successfully. May 8 00:46:05.714387 containerd[1463]: time="2025-05-08T00:46:05.714326358Z" level=info msg="StartContainer for \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\" returns successfully" May 8 00:46:05.742314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed-rootfs.mount: Deactivated successfully. 
May 8 00:46:05.747389 containerd[1463]: time="2025-05-08T00:46:05.747318560Z" level=info msg="shim disconnected" id=1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed namespace=k8s.io May 8 00:46:05.747389 containerd[1463]: time="2025-05-08T00:46:05.747378664Z" level=warning msg="cleaning up after shim disconnected" id=1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed namespace=k8s.io May 8 00:46:05.747389 containerd[1463]: time="2025-05-08T00:46:05.747387931Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:46:06.461952 kubelet[2504]: E0508 00:46:06.461923 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:06.461952 kubelet[2504]: E0508 00:46:06.461935 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:06.478122 containerd[1463]: time="2025-05-08T00:46:06.478059334Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:46:06.520885 containerd[1463]: time="2025-05-08T00:46:06.520819940Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\"" May 8 00:46:06.521461 containerd[1463]: time="2025-05-08T00:46:06.521314411Z" level=info msg="StartContainer for \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\"" May 8 00:46:06.547550 systemd[1]: Started cri-containerd-47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20.scope - libcontainer container 
47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20. May 8 00:46:06.574804 systemd[1]: cri-containerd-47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20.scope: Deactivated successfully. May 8 00:46:06.577611 containerd[1463]: time="2025-05-08T00:46:06.577577412Z" level=info msg="StartContainer for \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\" returns successfully" May 8 00:46:06.603899 containerd[1463]: time="2025-05-08T00:46:06.603822648Z" level=info msg="shim disconnected" id=47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20 namespace=k8s.io May 8 00:46:06.604106 containerd[1463]: time="2025-05-08T00:46:06.603903601Z" level=warning msg="cleaning up after shim disconnected" id=47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20 namespace=k8s.io May 8 00:46:06.604106 containerd[1463]: time="2025-05-08T00:46:06.603915754Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:46:06.654678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20-rootfs.mount: Deactivated successfully. 
May 8 00:46:07.467109 kubelet[2504]: E0508 00:46:07.467071 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:07.470143 containerd[1463]: time="2025-05-08T00:46:07.470098115Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:46:07.780131 containerd[1463]: time="2025-05-08T00:46:07.779983657Z" level=info msg="CreateContainer within sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\"" May 8 00:46:07.780953 containerd[1463]: time="2025-05-08T00:46:07.780883010Z" level=info msg="StartContainer for \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\"" May 8 00:46:07.809561 systemd[1]: Started cri-containerd-1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5.scope - libcontainer container 1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5. May 8 00:46:07.842973 containerd[1463]: time="2025-05-08T00:46:07.842909616Z" level=info msg="StartContainer for \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\" returns successfully" May 8 00:46:08.000042 kubelet[2504]: I0508 00:46:07.999996 2504 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:46:08.034770 systemd[1]: Created slice kubepods-burstable-podef2a25ba_f4ae_4fe9_89e4_9e04be7f14eb.slice - libcontainer container kubepods-burstable-podef2a25ba_f4ae_4fe9_89e4_9e04be7f14eb.slice. 
May 8 00:46:08.036007 kubelet[2504]: I0508 00:46:08.035975 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmkmt\" (UniqueName: \"kubernetes.io/projected/ef2a25ba-f4ae-4fe9-89e4-9e04be7f14eb-kube-api-access-kmkmt\") pod \"coredns-668d6bf9bc-bnqm8\" (UID: \"ef2a25ba-f4ae-4fe9-89e4-9e04be7f14eb\") " pod="kube-system/coredns-668d6bf9bc-bnqm8" May 8 00:46:08.036007 kubelet[2504]: I0508 00:46:08.036008 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddb5l\" (UniqueName: \"kubernetes.io/projected/30714c40-bd89-4d16-b985-63e87dacb00b-kube-api-access-ddb5l\") pod \"coredns-668d6bf9bc-7fvtg\" (UID: \"30714c40-bd89-4d16-b985-63e87dacb00b\") " pod="kube-system/coredns-668d6bf9bc-7fvtg" May 8 00:46:08.036211 kubelet[2504]: I0508 00:46:08.036026 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef2a25ba-f4ae-4fe9-89e4-9e04be7f14eb-config-volume\") pod \"coredns-668d6bf9bc-bnqm8\" (UID: \"ef2a25ba-f4ae-4fe9-89e4-9e04be7f14eb\") " pod="kube-system/coredns-668d6bf9bc-bnqm8" May 8 00:46:08.036211 kubelet[2504]: I0508 00:46:08.036046 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30714c40-bd89-4d16-b985-63e87dacb00b-config-volume\") pod \"coredns-668d6bf9bc-7fvtg\" (UID: \"30714c40-bd89-4d16-b985-63e87dacb00b\") " pod="kube-system/coredns-668d6bf9bc-7fvtg" May 8 00:46:08.043050 systemd[1]: Created slice kubepods-burstable-pod30714c40_bd89_4d16_b985_63e87dacb00b.slice - libcontainer container kubepods-burstable-pod30714c40_bd89_4d16_b985_63e87dacb00b.slice. 
May 8 00:46:08.340101 kubelet[2504]: E0508 00:46:08.339986 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:08.341006 containerd[1463]: time="2025-05-08T00:46:08.340892206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bnqm8,Uid:ef2a25ba-f4ae-4fe9-89e4-9e04be7f14eb,Namespace:kube-system,Attempt:0,}" May 8 00:46:08.346533 kubelet[2504]: E0508 00:46:08.346496 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:08.347511 containerd[1463]: time="2025-05-08T00:46:08.347131633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7fvtg,Uid:30714c40-bd89-4d16-b985-63e87dacb00b,Namespace:kube-system,Attempt:0,}" May 8 00:46:08.397661 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:49612.service - OpenSSH per-connection server daemon (10.0.0.1:49612). May 8 00:46:08.448024 sshd[3323]: Accepted publickey for core from 10.0.0.1 port 49612 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:08.449950 sshd[3323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:08.455512 systemd-logind[1444]: New session 8 of user core. May 8 00:46:08.459528 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:46:08.472557 kubelet[2504]: E0508 00:46:08.472470 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:08.599378 sshd[3323]: pam_unix(sshd:session): session closed for user core May 8 00:46:08.603654 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:49612.service: Deactivated successfully. 
May 8 00:46:08.605797 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:46:08.606609 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. May 8 00:46:08.607487 systemd-logind[1444]: Removed session 8. May 8 00:46:09.474751 kubelet[2504]: E0508 00:46:09.474692 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:10.066177 systemd-networkd[1400]: cilium_host: Link UP May 8 00:46:10.066428 systemd-networkd[1400]: cilium_net: Link UP May 8 00:46:10.067034 systemd-networkd[1400]: cilium_net: Gained carrier May 8 00:46:10.067289 systemd-networkd[1400]: cilium_host: Gained carrier May 8 00:46:10.170219 systemd-networkd[1400]: cilium_vxlan: Link UP May 8 00:46:10.170229 systemd-networkd[1400]: cilium_vxlan: Gained carrier May 8 00:46:10.390443 kernel: NET: Registered PF_ALG protocol family May 8 00:46:10.475917 kubelet[2504]: E0508 00:46:10.475856 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:10.702651 systemd-networkd[1400]: cilium_host: Gained IPv6LL May 8 00:46:10.831521 systemd-networkd[1400]: cilium_net: Gained IPv6LL May 8 00:46:11.035751 systemd-networkd[1400]: lxc_health: Link UP May 8 00:46:11.045640 systemd-networkd[1400]: lxc_health: Gained carrier May 8 00:46:11.439622 systemd-networkd[1400]: lxcab3cf542427e: Link UP May 8 00:46:11.447647 kernel: eth0: renamed from tmp2857a May 8 00:46:11.452081 systemd-networkd[1400]: lxcdac95757851a: Link UP May 8 00:46:11.462449 kernel: eth0: renamed from tmpd046b May 8 00:46:11.467976 systemd-networkd[1400]: lxcab3cf542427e: Gained carrier May 8 00:46:11.470622 systemd-networkd[1400]: lxcdac95757851a: Gained carrier May 8 00:46:11.484116 kubelet[2504]: E0508 00:46:11.482813 2504 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:11.853536 systemd-networkd[1400]: cilium_vxlan: Gained IPv6LL May 8 00:46:12.173611 systemd-networkd[1400]: lxc_health: Gained IPv6LL May 8 00:46:12.483130 kubelet[2504]: E0508 00:46:12.483096 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:12.533773 kubelet[2504]: I0508 00:46:12.533107 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8gzv8" podStartSLOduration=11.44257924 podStartE2EDuration="23.533091752s" podCreationTimestamp="2025-05-08 00:45:49 +0000 UTC" firstStartedPulling="2025-05-08 00:45:50.541730022 +0000 UTC m=+7.347086071" lastFinishedPulling="2025-05-08 00:46:02.632242534 +0000 UTC m=+19.437598583" observedRunningTime="2025-05-08 00:46:08.487231628 +0000 UTC m=+25.292587677" watchObservedRunningTime="2025-05-08 00:46:12.533091752 +0000 UTC m=+29.338447801" May 8 00:46:12.813630 systemd-networkd[1400]: lxcdac95757851a: Gained IPv6LL May 8 00:46:13.453682 systemd-networkd[1400]: lxcab3cf542427e: Gained IPv6LL May 8 00:46:13.485338 kubelet[2504]: E0508 00:46:13.485293 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:13.613206 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:49616.service - OpenSSH per-connection server daemon (10.0.0.1:49616). May 8 00:46:13.654405 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 49616 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:13.656303 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:13.660535 systemd-logind[1444]: New session 9 of user core. 
May 8 00:46:13.670614 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:46:13.793057 sshd[3744]: pam_unix(sshd:session): session closed for user core May 8 00:46:13.797494 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:49616.service: Deactivated successfully. May 8 00:46:13.799988 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:46:13.800772 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. May 8 00:46:13.801775 systemd-logind[1444]: Removed session 9. May 8 00:46:15.028722 containerd[1463]: time="2025-05-08T00:46:15.027955250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:15.028722 containerd[1463]: time="2025-05-08T00:46:15.028685313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:15.028722 containerd[1463]: time="2025-05-08T00:46:15.028697005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:15.029263 containerd[1463]: time="2025-05-08T00:46:15.028768419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:15.058910 containerd[1463]: time="2025-05-08T00:46:15.058800033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:15.058910 containerd[1463]: time="2025-05-08T00:46:15.058873080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:15.058910 containerd[1463]: time="2025-05-08T00:46:15.058884562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:15.059068 containerd[1463]: time="2025-05-08T00:46:15.058951938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:15.064583 systemd[1]: Started cri-containerd-2857a128fae4777a8844812002836db0d3b97c5d1eb748dba887b4cefd0db100.scope - libcontainer container 2857a128fae4777a8844812002836db0d3b97c5d1eb748dba887b4cefd0db100. May 8 00:46:15.083553 systemd[1]: Started cri-containerd-d046b7142161c279146cb7817120b8d4c957a355f17e11eaa11bf2d8c33ed1b2.scope - libcontainer container d046b7142161c279146cb7817120b8d4c957a355f17e11eaa11bf2d8c33ed1b2. May 8 00:46:15.087618 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:46:15.097854 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:46:15.123485 containerd[1463]: time="2025-05-08T00:46:15.123396073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bnqm8,Uid:ef2a25ba-f4ae-4fe9-89e4-9e04be7f14eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2857a128fae4777a8844812002836db0d3b97c5d1eb748dba887b4cefd0db100\"" May 8 00:46:15.125197 kubelet[2504]: E0508 00:46:15.124510 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:15.130881 containerd[1463]: time="2025-05-08T00:46:15.130833888Z" level=info msg="CreateContainer within sandbox \"2857a128fae4777a8844812002836db0d3b97c5d1eb748dba887b4cefd0db100\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:46:15.133368 containerd[1463]: time="2025-05-08T00:46:15.133290707Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-7fvtg,Uid:30714c40-bd89-4d16-b985-63e87dacb00b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d046b7142161c279146cb7817120b8d4c957a355f17e11eaa11bf2d8c33ed1b2\"" May 8 00:46:15.135382 kubelet[2504]: E0508 00:46:15.135335 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:15.137189 containerd[1463]: time="2025-05-08T00:46:15.137157868Z" level=info msg="CreateContainer within sandbox \"d046b7142161c279146cb7817120b8d4c957a355f17e11eaa11bf2d8c33ed1b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:46:15.398379 containerd[1463]: time="2025-05-08T00:46:15.398273214Z" level=info msg="CreateContainer within sandbox \"2857a128fae4777a8844812002836db0d3b97c5d1eb748dba887b4cefd0db100\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4dfb5fd551a48a832c6fcd141c2f0c28b8f33789a7155078fb12deb5bf35c3d9\"" May 8 00:46:15.399086 containerd[1463]: time="2025-05-08T00:46:15.398671112Z" level=info msg="StartContainer for \"4dfb5fd551a48a832c6fcd141c2f0c28b8f33789a7155078fb12deb5bf35c3d9\"" May 8 00:46:15.405897 containerd[1463]: time="2025-05-08T00:46:15.405830964Z" level=info msg="CreateContainer within sandbox \"d046b7142161c279146cb7817120b8d4c957a355f17e11eaa11bf2d8c33ed1b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97d5f61209855e1ac49b5942057fe8c3ed42d89846247e77662ef950db06f86d\"" May 8 00:46:15.406926 containerd[1463]: time="2025-05-08T00:46:15.406904803Z" level=info msg="StartContainer for \"97d5f61209855e1ac49b5942057fe8c3ed42d89846247e77662ef950db06f86d\"" May 8 00:46:15.428575 systemd[1]: Started cri-containerd-4dfb5fd551a48a832c6fcd141c2f0c28b8f33789a7155078fb12deb5bf35c3d9.scope - libcontainer container 4dfb5fd551a48a832c6fcd141c2f0c28b8f33789a7155078fb12deb5bf35c3d9. 
May 8 00:46:15.431565 systemd[1]: Started cri-containerd-97d5f61209855e1ac49b5942057fe8c3ed42d89846247e77662ef950db06f86d.scope - libcontainer container 97d5f61209855e1ac49b5942057fe8c3ed42d89846247e77662ef950db06f86d. May 8 00:46:15.587088 containerd[1463]: time="2025-05-08T00:46:15.586932614Z" level=info msg="StartContainer for \"4dfb5fd551a48a832c6fcd141c2f0c28b8f33789a7155078fb12deb5bf35c3d9\" returns successfully" May 8 00:46:15.587088 containerd[1463]: time="2025-05-08T00:46:15.587032922Z" level=info msg="StartContainer for \"97d5f61209855e1ac49b5942057fe8c3ed42d89846247e77662ef950db06f86d\" returns successfully" May 8 00:46:15.592775 kubelet[2504]: E0508 00:46:15.592560 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:16.594808 kubelet[2504]: E0508 00:46:16.594656 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:16.595560 kubelet[2504]: E0508 00:46:16.595525 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:16.629545 kubelet[2504]: I0508 00:46:16.629451 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7fvtg" podStartSLOduration=26.629426978 podStartE2EDuration="26.629426978s" podCreationTimestamp="2025-05-08 00:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:46:15.616504473 +0000 UTC m=+32.421860522" watchObservedRunningTime="2025-05-08 00:46:16.629426978 +0000 UTC m=+33.434783017" May 8 00:46:16.671842 kubelet[2504]: I0508 00:46:16.671743 2504 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bnqm8" podStartSLOduration=26.671722358 podStartE2EDuration="26.671722358s" podCreationTimestamp="2025-05-08 00:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:46:16.631908203 +0000 UTC m=+33.437264252" watchObservedRunningTime="2025-05-08 00:46:16.671722358 +0000 UTC m=+33.477078407" May 8 00:46:17.596307 kubelet[2504]: E0508 00:46:17.596270 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:18.804465 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:52304.service - OpenSSH per-connection server daemon (10.0.0.1:52304). May 8 00:46:18.846331 sshd[3933]: Accepted publickey for core from 10.0.0.1 port 52304 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:18.847907 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:18.851642 systemd-logind[1444]: New session 10 of user core. May 8 00:46:18.857639 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:46:18.983967 sshd[3933]: pam_unix(sshd:session): session closed for user core May 8 00:46:18.987785 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:52304.service: Deactivated successfully. May 8 00:46:18.989726 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:46:18.990263 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. May 8 00:46:18.991316 systemd-logind[1444]: Removed session 10. May 8 00:46:24.000309 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:52312.service - OpenSSH per-connection server daemon (10.0.0.1:52312). 
May 8 00:46:24.037191 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 52312 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:24.039236 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:24.043369 systemd-logind[1444]: New session 11 of user core. May 8 00:46:24.052549 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:46:24.169545 sshd[3955]: pam_unix(sshd:session): session closed for user core May 8 00:46:24.173770 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:52312.service: Deactivated successfully. May 8 00:46:24.175627 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:46:24.176329 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. May 8 00:46:24.177270 systemd-logind[1444]: Removed session 11. May 8 00:46:26.595974 kubelet[2504]: E0508 00:46:26.595936 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:26.619436 kubelet[2504]: E0508 00:46:26.617056 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:29.189550 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:39394.service - OpenSSH per-connection server daemon (10.0.0.1:39394). May 8 00:46:29.225604 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 39394 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:29.227054 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:29.230897 systemd-logind[1444]: New session 12 of user core. May 8 00:46:29.238602 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 8 00:46:29.351209 sshd[3974]: pam_unix(sshd:session): session closed for user core May 8 00:46:29.360242 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:39394.service: Deactivated successfully. May 8 00:46:29.361998 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:46:29.363588 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. May 8 00:46:29.373650 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:39404.service - OpenSSH per-connection server daemon (10.0.0.1:39404). May 8 00:46:29.374569 systemd-logind[1444]: Removed session 12. May 8 00:46:29.405759 sshd[3989]: Accepted publickey for core from 10.0.0.1 port 39404 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:29.407231 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:29.411239 systemd-logind[1444]: New session 13 of user core. May 8 00:46:29.421540 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:46:29.583900 sshd[3989]: pam_unix(sshd:session): session closed for user core May 8 00:46:29.594066 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:39404.service: Deactivated successfully. May 8 00:46:29.595829 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:46:29.597524 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. May 8 00:46:29.604778 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:39420.service - OpenSSH per-connection server daemon (10.0.0.1:39420). May 8 00:46:29.605842 systemd-logind[1444]: Removed session 13. May 8 00:46:29.635315 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 39420 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:29.636908 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:29.640737 systemd-logind[1444]: New session 14 of user core. May 8 00:46:29.650584 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 8 00:46:29.768084 sshd[4001]: pam_unix(sshd:session): session closed for user core May 8 00:46:29.772373 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:39420.service: Deactivated successfully. May 8 00:46:29.774364 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:46:29.774954 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. May 8 00:46:29.775789 systemd-logind[1444]: Removed session 14. May 8 00:46:34.784299 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:39434.service - OpenSSH per-connection server daemon (10.0.0.1:39434). May 8 00:46:34.836420 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 39434 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:34.838089 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:34.842702 systemd-logind[1444]: New session 15 of user core. May 8 00:46:34.857571 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:46:34.980672 sshd[4018]: pam_unix(sshd:session): session closed for user core May 8 00:46:34.985543 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:39434.service: Deactivated successfully. May 8 00:46:34.988332 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:46:34.989160 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. May 8 00:46:34.990124 systemd-logind[1444]: Removed session 15. May 8 00:46:39.992105 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:58054.service - OpenSSH per-connection server daemon (10.0.0.1:58054). May 8 00:46:40.027089 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 58054 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:40.028816 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:40.032642 systemd-logind[1444]: New session 16 of user core. May 8 00:46:40.039570 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 8 00:46:40.143232 sshd[4032]: pam_unix(sshd:session): session closed for user core
May 8 00:46:40.154960 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:58054.service: Deactivated successfully.
May 8 00:46:40.156755 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:46:40.158070 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit.
May 8 00:46:40.168102 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:58058.service - OpenSSH per-connection server daemon (10.0.0.1:58058).
May 8 00:46:40.169215 systemd-logind[1444]: Removed session 16.
May 8 00:46:40.198991 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 58058 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:46:40.200382 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:46:40.203879 systemd-logind[1444]: New session 17 of user core.
May 8 00:46:40.215513 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 00:46:40.433996 sshd[4046]: pam_unix(sshd:session): session closed for user core
May 8 00:46:40.443984 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:58058.service: Deactivated successfully.
May 8 00:46:40.445630 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:46:40.446870 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit.
May 8 00:46:40.448013 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:58064.service - OpenSSH per-connection server daemon (10.0.0.1:58064).
May 8 00:46:40.448946 systemd-logind[1444]: Removed session 17.
May 8 00:46:40.486513 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 58064 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:46:40.487890 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:46:40.491573 systemd-logind[1444]: New session 18 of user core.
May 8 00:46:40.506534 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 00:46:41.218643 sshd[4059]: pam_unix(sshd:session): session closed for user core
May 8 00:46:41.228249 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:58064.service: Deactivated successfully.
May 8 00:46:41.231936 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:46:41.233153 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit.
May 8 00:46:41.242821 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:58068.service - OpenSSH per-connection server daemon (10.0.0.1:58068).
May 8 00:46:41.244848 systemd-logind[1444]: Removed session 18.
May 8 00:46:41.274839 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 58068 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:46:41.276265 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:46:41.280165 systemd-logind[1444]: New session 19 of user core.
May 8 00:46:41.288545 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 00:46:41.509698 sshd[4080]: pam_unix(sshd:session): session closed for user core
May 8 00:46:41.518634 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:58068.service: Deactivated successfully.
May 8 00:46:41.520756 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:46:41.522308 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit.
May 8 00:46:41.533695 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:58070.service - OpenSSH per-connection server daemon (10.0.0.1:58070).
May 8 00:46:41.534673 systemd-logind[1444]: Removed session 19.
May 8 00:46:41.565421 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 58070 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:46:41.566792 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:46:41.570426 systemd-logind[1444]: New session 20 of user core.
May 8 00:46:41.580524 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 00:46:41.684898 sshd[4093]: pam_unix(sshd:session): session closed for user core
May 8 00:46:41.688383 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:58070.service: Deactivated successfully.
May 8 00:46:41.690126 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:46:41.690679 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
May 8 00:46:41.691504 systemd-logind[1444]: Removed session 20.
May 8 00:46:46.700367 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:36054.service - OpenSSH per-connection server daemon (10.0.0.1:36054).
May 8 00:46:46.737716 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 36054 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:46:46.739272 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:46:46.742992 systemd-logind[1444]: New session 21 of user core.
May 8 00:46:46.759562 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 00:46:46.867643 sshd[4109]: pam_unix(sshd:session): session closed for user core
May 8 00:46:46.871595 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:36054.service: Deactivated successfully.
May 8 00:46:46.874229 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:46:46.874955 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit.
May 8 00:46:46.875889 systemd-logind[1444]: Removed session 21.
May 8 00:46:51.879645 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:36058.service - OpenSSH per-connection server daemon (10.0.0.1:36058).
May 8 00:46:51.917027 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 36058 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:46:51.918918 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:46:51.923355 systemd-logind[1444]: New session 22 of user core.
May 8 00:46:51.932596 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 00:46:52.036954 sshd[4127]: pam_unix(sshd:session): session closed for user core
May 8 00:46:52.040744 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:36058.service: Deactivated successfully.
May 8 00:46:52.042500 systemd[1]: session-22.scope: Deactivated successfully.
May 8 00:46:52.043047 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit.
May 8 00:46:52.043847 systemd-logind[1444]: Removed session 22.
May 8 00:46:57.048363 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:56308.service - OpenSSH per-connection server daemon (10.0.0.1:56308).
May 8 00:46:57.084206 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 56308 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:46:57.085930 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:46:57.090380 systemd-logind[1444]: New session 23 of user core.
May 8 00:46:57.099555 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 00:46:57.210812 sshd[4142]: pam_unix(sshd:session): session closed for user core
May 8 00:46:57.214980 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:56308.service: Deactivated successfully.
May 8 00:46:57.217012 systemd[1]: session-23.scope: Deactivated successfully.
May 8 00:46:57.217634 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit.
May 8 00:46:57.218550 systemd-logind[1444]: Removed session 23.
May 8 00:46:58.299623 kubelet[2504]: E0508 00:46:58.299580 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:02.222787 systemd[1]: Started sshd@23-10.0.0.130:22-10.0.0.1:56320.service - OpenSSH per-connection server daemon (10.0.0.1:56320).
May 8 00:47:02.259115 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 56320 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:47:02.260666 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:47:02.264465 systemd-logind[1444]: New session 24 of user core.
May 8 00:47:02.272573 systemd[1]: Started session-24.scope - Session 24 of User core.
May 8 00:47:02.381720 sshd[4156]: pam_unix(sshd:session): session closed for user core
May 8 00:47:02.391062 systemd[1]: sshd@23-10.0.0.130:22-10.0.0.1:56320.service: Deactivated successfully.
May 8 00:47:02.392707 systemd[1]: session-24.scope: Deactivated successfully.
May 8 00:47:02.393950 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit.
May 8 00:47:02.395658 systemd[1]: Started sshd@24-10.0.0.130:22-10.0.0.1:56326.service - OpenSSH per-connection server daemon (10.0.0.1:56326).
May 8 00:47:02.396436 systemd-logind[1444]: Removed session 24.
May 8 00:47:02.447065 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 56326 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:47:02.448565 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:47:02.452590 systemd-logind[1444]: New session 25 of user core.
May 8 00:47:02.462541 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 00:47:03.889354 containerd[1463]: time="2025-05-08T00:47:03.889297135Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:47:03.899279 containerd[1463]: time="2025-05-08T00:47:03.899228051Z" level=info msg="StopContainer for \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\" with timeout 30 (s)"
May 8 00:47:03.899939 containerd[1463]: time="2025-05-08T00:47:03.899692396Z" level=info msg="StopContainer for \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\" with timeout 2 (s)"
May 8 00:47:03.900190 containerd[1463]: time="2025-05-08T00:47:03.900165078Z" level=info msg="Stop container \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\" with signal terminated"
May 8 00:47:03.904029 containerd[1463]: time="2025-05-08T00:47:03.903979635Z" level=info msg="Stop container \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\" with signal terminated"
May 8 00:47:03.908883 systemd-networkd[1400]: lxc_health: Link DOWN
May 8 00:47:03.908897 systemd-networkd[1400]: lxc_health: Lost carrier
May 8 00:47:03.915005 systemd[1]: cri-containerd-7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9.scope: Deactivated successfully.
May 8 00:47:03.931805 systemd[1]: cri-containerd-1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5.scope: Deactivated successfully.
May 8 00:47:03.932121 systemd[1]: cri-containerd-1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5.scope: Consumed 6.868s CPU time.
May 8 00:47:03.939550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9-rootfs.mount: Deactivated successfully.
May 8 00:47:03.965630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5-rootfs.mount: Deactivated successfully.
May 8 00:47:04.007935 containerd[1463]: time="2025-05-08T00:47:04.007856919Z" level=info msg="shim disconnected" id=7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9 namespace=k8s.io
May 8 00:47:04.008144 containerd[1463]: time="2025-05-08T00:47:04.007935719Z" level=warning msg="cleaning up after shim disconnected" id=7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9 namespace=k8s.io
May 8 00:47:04.008144 containerd[1463]: time="2025-05-08T00:47:04.007947802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:04.033877 containerd[1463]: time="2025-05-08T00:47:04.033816401Z" level=info msg="shim disconnected" id=1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5 namespace=k8s.io
May 8 00:47:04.033877 containerd[1463]: time="2025-05-08T00:47:04.033865856Z" level=warning msg="cleaning up after shim disconnected" id=1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5 namespace=k8s.io
May 8 00:47:04.033877 containerd[1463]: time="2025-05-08T00:47:04.033874903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:04.071542 containerd[1463]: time="2025-05-08T00:47:04.071319556Z" level=info msg="StopContainer for \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\" returns successfully"
May 8 00:47:04.075220 containerd[1463]: time="2025-05-08T00:47:04.075181409Z" level=info msg="StopPodSandbox for \"761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0\""
May 8 00:47:04.075266 containerd[1463]: time="2025-05-08T00:47:04.075224813Z" level=info msg="Container to stop \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:47:04.077162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0-shm.mount: Deactivated successfully.
May 8 00:47:04.082235 systemd[1]: cri-containerd-761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0.scope: Deactivated successfully.
May 8 00:47:04.103722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0-rootfs.mount: Deactivated successfully.
May 8 00:47:04.112470 containerd[1463]: time="2025-05-08T00:47:04.112427324Z" level=info msg="StopContainer for \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\" returns successfully"
May 8 00:47:04.113178 containerd[1463]: time="2025-05-08T00:47:04.113030334Z" level=info msg="StopPodSandbox for \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\""
May 8 00:47:04.113178 containerd[1463]: time="2025-05-08T00:47:04.113071031Z" level=info msg="Container to stop \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:47:04.113178 containerd[1463]: time="2025-05-08T00:47:04.113087463Z" level=info msg="Container to stop \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:47:04.113178 containerd[1463]: time="2025-05-08T00:47:04.113100568Z" level=info msg="Container to stop \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:47:04.113178 containerd[1463]: time="2025-05-08T00:47:04.113112361Z" level=info msg="Container to stop \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:47:04.113178 containerd[1463]: time="2025-05-08T00:47:04.113123953Z" level=info msg="Container to stop \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:47:04.119538 systemd[1]: cri-containerd-7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200.scope: Deactivated successfully.
May 8 00:47:04.169787 containerd[1463]: time="2025-05-08T00:47:04.169637969Z" level=info msg="shim disconnected" id=761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0 namespace=k8s.io
May 8 00:47:04.170762 containerd[1463]: time="2025-05-08T00:47:04.170222724Z" level=warning msg="cleaning up after shim disconnected" id=761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0 namespace=k8s.io
May 8 00:47:04.170762 containerd[1463]: time="2025-05-08T00:47:04.170272659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:04.187179 containerd[1463]: time="2025-05-08T00:47:04.186976952Z" level=info msg="shim disconnected" id=7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200 namespace=k8s.io
May 8 00:47:04.187179 containerd[1463]: time="2025-05-08T00:47:04.187047026Z" level=warning msg="cleaning up after shim disconnected" id=7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200 namespace=k8s.io
May 8 00:47:04.187179 containerd[1463]: time="2025-05-08T00:47:04.187058298Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:04.196861 containerd[1463]: time="2025-05-08T00:47:04.196780588Z" level=info msg="TearDown network for sandbox \"761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0\" successfully"
May 8 00:47:04.196861 containerd[1463]: time="2025-05-08T00:47:04.196840251Z" level=info msg="StopPodSandbox for \"761862dcfbecac24cc6a3647fe9e7cdbeb478c521b57cb7f616105e02db227b0\" returns successfully"
May 8 00:47:04.203278 containerd[1463]: time="2025-05-08T00:47:04.203233863Z" level=info msg="TearDown network for sandbox \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" successfully"
May 8 00:47:04.203278 containerd[1463]: time="2025-05-08T00:47:04.203266335Z" level=info msg="StopPodSandbox for \"7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200\" returns successfully"
May 8 00:47:04.266861 kubelet[2504]: I0508 00:47:04.266817 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-xtables-lock\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.266861 kubelet[2504]: I0508 00:47:04.266865 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-config-path\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267493 kubelet[2504]: I0508 00:47:04.266885 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-bpf-maps\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267493 kubelet[2504]: I0508 00:47:04.266901 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7txp\" (UniqueName: \"kubernetes.io/projected/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf-kube-api-access-t7txp\") pod \"d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf\" (UID: \"d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf\") "
May 8 00:47:04.267493 kubelet[2504]: I0508 00:47:04.266915 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf-cilium-config-path\") pod \"d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf\" (UID: \"d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf\") "
May 8 00:47:04.267493 kubelet[2504]: I0508 00:47:04.266930 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-clustermesh-secrets\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267493 kubelet[2504]: I0508 00:47:04.266944 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-lib-modules\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267493 kubelet[2504]: I0508 00:47:04.266958 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-hostproc\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267644 kubelet[2504]: I0508 00:47:04.266937 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.267644 kubelet[2504]: I0508 00:47:04.267004 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.267644 kubelet[2504]: I0508 00:47:04.266973 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-host-proc-sys-kernel\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267644 kubelet[2504]: I0508 00:47:04.267052 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-run\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267644 kubelet[2504]: I0508 00:47:04.267075 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-hubble-tls\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267765 kubelet[2504]: I0508 00:47:04.267090 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-host-proc-sys-net\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267765 kubelet[2504]: I0508 00:47:04.267115 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjkhf\" (UniqueName: \"kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-kube-api-access-xjkhf\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267765 kubelet[2504]: I0508 00:47:04.267129 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-etc-cni-netd\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267765 kubelet[2504]: I0508 00:47:04.267149 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-cgroup\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267765 kubelet[2504]: I0508 00:47:04.267162 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cni-path\") pod \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\" (UID: \"c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f\") "
May 8 00:47:04.267765 kubelet[2504]: I0508 00:47:04.267213 2504 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.267962 kubelet[2504]: I0508 00:47:04.267223 2504 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.267962 kubelet[2504]: I0508 00:47:04.267246 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cni-path" (OuterVolumeSpecName: "cni-path") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.267962 kubelet[2504]: I0508 00:47:04.267264 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.267962 kubelet[2504]: I0508 00:47:04.267497 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.269672 kubelet[2504]: I0508 00:47:04.269648 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.270486 kubelet[2504]: I0508 00:47:04.270455 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.270538 kubelet[2504]: I0508 00:47:04.270497 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.271119 kubelet[2504]: I0508 00:47:04.270931 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-hostproc" (OuterVolumeSpecName: "hostproc") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.271119 kubelet[2504]: I0508 00:47:04.271053 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:47:04.271119 kubelet[2504]: I0508 00:47:04.271080 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:47:04.271503 kubelet[2504]: I0508 00:47:04.271471 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf" (UID: "d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 00:47:04.272068 kubelet[2504]: I0508 00:47:04.272033 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf-kube-api-access-t7txp" (OuterVolumeSpecName: "kube-api-access-t7txp") pod "d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf" (UID: "d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf"). InnerVolumeSpecName "kube-api-access-t7txp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:47:04.272139 kubelet[2504]: I0508 00:47:04.272108 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 8 00:47:04.272139 kubelet[2504]: I0508 00:47:04.272110 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-kube-api-access-xjkhf" (OuterVolumeSpecName: "kube-api-access-xjkhf") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "kube-api-access-xjkhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:47:04.274419 kubelet[2504]: I0508 00:47:04.274375 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" (UID: "c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 00:47:04.367695 kubelet[2504]: I0508 00:47:04.367630 2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t7txp\" (UniqueName: \"kubernetes.io/projected/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf-kube-api-access-t7txp\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.367695 kubelet[2504]: I0508 00:47:04.367674 2504 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.367695 kubelet[2504]: I0508 00:47:04.367689 2504 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.367695 kubelet[2504]: I0508 00:47:04.367699 2504 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-lib-modules\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.367695 kubelet[2504]: I0508 00:47:04.367710 2504 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.367695 kubelet[2504]: I0508 00:47:04.367719 2504 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-hostproc\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.368003 kubelet[2504]: I0508 00:47:04.367729 2504 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-run\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.368003 kubelet[2504]: I0508 00:47:04.367738 2504 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.368003 kubelet[2504]: I0508 00:47:04.367746 2504 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.368003 kubelet[2504]: I0508 00:47:04.367754 2504 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.368003 kubelet[2504]: I0508 00:47:04.367762 2504 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.368003 kubelet[2504]: I0508 00:47:04.367770 2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xjkhf\" (UniqueName: \"kubernetes.io/projected/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-kube-api-access-xjkhf\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.368003 kubelet[2504]: I0508 00:47:04.367779 2504 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cni-path\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.368003 kubelet[2504]: I0508 00:47:04.367787 2504 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 00:47:04.691493 kubelet[2504]: I0508 00:47:04.691452 2504 scope.go:117] "RemoveContainer" containerID="7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9"
May 8 00:47:04.692803 containerd[1463]: time="2025-05-08T00:47:04.692731676Z" level=info msg="RemoveContainer for \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\""
May 8 00:47:04.697215 systemd[1]: Removed slice kubepods-besteffort-podd1a57d54_ed0c_4214_a2db_7de2b0ea2ecf.slice - libcontainer container kubepods-besteffort-podd1a57d54_ed0c_4214_a2db_7de2b0ea2ecf.slice.
May 8 00:47:04.701591 systemd[1]: Removed slice kubepods-burstable-podc0ce0610_2ac9_45d0_9e0f_b4d60f4f3b8f.slice - libcontainer container kubepods-burstable-podc0ce0610_2ac9_45d0_9e0f_b4d60f4f3b8f.slice.
May 8 00:47:04.701703 systemd[1]: kubepods-burstable-podc0ce0610_2ac9_45d0_9e0f_b4d60f4f3b8f.slice: Consumed 6.978s CPU time.
May 8 00:47:04.756038 containerd[1463]: time="2025-05-08T00:47:04.755957692Z" level=info msg="RemoveContainer for \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\" returns successfully" May 8 00:47:04.756547 kubelet[2504]: I0508 00:47:04.756501 2504 scope.go:117] "RemoveContainer" containerID="7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9" May 8 00:47:04.760935 containerd[1463]: time="2025-05-08T00:47:04.760870630Z" level=error msg="ContainerStatus for \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\": not found" May 8 00:47:04.769808 kubelet[2504]: E0508 00:47:04.769739 2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\": not found" containerID="7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9" May 8 00:47:04.769997 kubelet[2504]: I0508 00:47:04.769796 2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9"} err="failed to get container status \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f43368783d59d821e70084cabd7244084576573ad3ce75439205ae3f581c6e9\": not found" May 8 00:47:04.769997 kubelet[2504]: I0508 00:47:04.769888 2504 scope.go:117] "RemoveContainer" containerID="1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5" May 8 00:47:04.771290 containerd[1463]: time="2025-05-08T00:47:04.771241436Z" level=info msg="RemoveContainer for \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\"" May 8 00:47:04.875915 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200-rootfs.mount: Deactivated successfully. May 8 00:47:04.876034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7010103a22fea0b6557175f1c58454ec53a46fddbb2f99d6a2047a61dc01a200-shm.mount: Deactivated successfully. May 8 00:47:04.876112 systemd[1]: var-lib-kubelet-pods-d1a57d54\x2ded0c\x2d4214\x2da2db\x2d7de2b0ea2ecf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt7txp.mount: Deactivated successfully. May 8 00:47:04.876199 systemd[1]: var-lib-kubelet-pods-c0ce0610\x2d2ac9\x2d45d0\x2d9e0f\x2db4d60f4f3b8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxjkhf.mount: Deactivated successfully. May 8 00:47:04.876285 systemd[1]: var-lib-kubelet-pods-c0ce0610\x2d2ac9\x2d45d0\x2d9e0f\x2db4d60f4f3b8f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:47:04.876373 systemd[1]: var-lib-kubelet-pods-c0ce0610\x2d2ac9\x2d45d0\x2d9e0f\x2db4d60f4f3b8f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 8 00:47:04.930364 containerd[1463]: time="2025-05-08T00:47:04.930286509Z" level=info msg="RemoveContainer for \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\" returns successfully" May 8 00:47:04.930889 kubelet[2504]: I0508 00:47:04.930647 2504 scope.go:117] "RemoveContainer" containerID="47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20" May 8 00:47:04.932080 containerd[1463]: time="2025-05-08T00:47:04.932021899Z" level=info msg="RemoveContainer for \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\"" May 8 00:47:05.006268 containerd[1463]: time="2025-05-08T00:47:05.006114823Z" level=info msg="RemoveContainer for \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\" returns successfully" May 8 00:47:05.006590 kubelet[2504]: I0508 00:47:05.006541 2504 scope.go:117] "RemoveContainer" containerID="1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed" May 8 00:47:05.007952 containerd[1463]: time="2025-05-08T00:47:05.007929031Z" level=info msg="RemoveContainer for \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\"" May 8 00:47:05.148932 containerd[1463]: time="2025-05-08T00:47:05.148874969Z" level=info msg="RemoveContainer for \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\" returns successfully" May 8 00:47:05.149187 kubelet[2504]: I0508 00:47:05.149150 2504 scope.go:117] "RemoveContainer" containerID="80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06" May 8 00:47:05.150440 containerd[1463]: time="2025-05-08T00:47:05.150388504Z" level=info msg="RemoveContainer for \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\"" May 8 00:47:05.277200 containerd[1463]: time="2025-05-08T00:47:05.277054494Z" level=info msg="RemoveContainer for \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\" returns successfully" May 8 00:47:05.277382 kubelet[2504]: I0508 00:47:05.277348 2504 scope.go:117] "RemoveContainer" 
containerID="05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2" May 8 00:47:05.278805 containerd[1463]: time="2025-05-08T00:47:05.278762149Z" level=info msg="RemoveContainer for \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\"" May 8 00:47:05.299377 kubelet[2504]: E0508 00:47:05.299314 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:05.301828 kubelet[2504]: I0508 00:47:05.301781 2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" path="/var/lib/kubelet/pods/c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f/volumes" May 8 00:47:05.302915 kubelet[2504]: I0508 00:47:05.302885 2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf" path="/var/lib/kubelet/pods/d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf/volumes" May 8 00:47:05.369766 containerd[1463]: time="2025-05-08T00:47:05.369704125Z" level=info msg="RemoveContainer for \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\" returns successfully" May 8 00:47:05.370171 kubelet[2504]: I0508 00:47:05.370058 2504 scope.go:117] "RemoveContainer" containerID="1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5" May 8 00:47:05.370471 containerd[1463]: time="2025-05-08T00:47:05.370383279Z" level=error msg="ContainerStatus for \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\": not found" May 8 00:47:05.370635 kubelet[2504]: E0508 00:47:05.370585 2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\": not found" containerID="1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5" May 8 00:47:05.370635 kubelet[2504]: I0508 00:47:05.370627 2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5"} err="failed to get container status \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"1998d8658eb62ec054462fd9bd3afc7e7d1db1cd9c468e20c9de304228bbb6e5\": not found" May 8 00:47:05.370635 kubelet[2504]: I0508 00:47:05.370650 2504 scope.go:117] "RemoveContainer" containerID="47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20" May 8 00:47:05.370936 containerd[1463]: time="2025-05-08T00:47:05.370888582Z" level=error msg="ContainerStatus for \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\": not found" May 8 00:47:05.371051 kubelet[2504]: E0508 00:47:05.371020 2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\": not found" containerID="47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20" May 8 00:47:05.371097 kubelet[2504]: I0508 00:47:05.371056 2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20"} err="failed to get container status \"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"47510c8d060c6644cb48ebc5213659afbe2973a4b1119853c544faed3d9f5d20\": not found" May 8 00:47:05.371097 kubelet[2504]: I0508 00:47:05.371074 2504 scope.go:117] "RemoveContainer" containerID="1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed" May 8 00:47:05.371254 containerd[1463]: time="2025-05-08T00:47:05.371220034Z" level=error msg="ContainerStatus for \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\": not found" May 8 00:47:05.371347 kubelet[2504]: E0508 00:47:05.371311 2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\": not found" containerID="1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed" May 8 00:47:05.371347 kubelet[2504]: I0508 00:47:05.371331 2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed"} err="failed to get container status \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"1151df86bb55c728cc62a560f4d34672ba20ca8c0b0d8131c9bfd8667c9968ed\": not found" May 8 00:47:05.371424 kubelet[2504]: I0508 00:47:05.371354 2504 scope.go:117] "RemoveContainer" containerID="80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06" May 8 00:47:05.371536 containerd[1463]: time="2025-05-08T00:47:05.371507421Z" level=error msg="ContainerStatus for \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\": not found" May 8 00:47:05.371624 kubelet[2504]: E0508 00:47:05.371605 2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\": not found" containerID="80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06" May 8 00:47:05.371667 kubelet[2504]: I0508 00:47:05.371624 2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06"} err="failed to get container status \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\": rpc error: code = NotFound desc = an error occurred when try to find container \"80ca1a137291e5616cd9b91ec5a2bbd6c46512fc7ea6ae9ecd1994b8055e1f06\": not found" May 8 00:47:05.371667 kubelet[2504]: I0508 00:47:05.371636 2504 scope.go:117] "RemoveContainer" containerID="05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2" May 8 00:47:05.371795 containerd[1463]: time="2025-05-08T00:47:05.371768359Z" level=error msg="ContainerStatus for \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\": not found" May 8 00:47:05.371890 kubelet[2504]: E0508 00:47:05.371868 2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\": not found" containerID="05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2" May 8 00:47:05.371890 kubelet[2504]: I0508 00:47:05.371887 2504 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2"} err="failed to get container status \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"05b0e1e4f2c069056d6c8f9637d69943540bb7bd43b6b63b514215fd6b06ebc2\": not found" May 8 00:47:05.778202 sshd[4170]: pam_unix(sshd:session): session closed for user core May 8 00:47:05.787569 systemd[1]: sshd@24-10.0.0.130:22-10.0.0.1:56326.service: Deactivated successfully. May 8 00:47:05.790940 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:47:05.792870 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. May 8 00:47:05.799848 systemd[1]: Started sshd@25-10.0.0.130:22-10.0.0.1:56342.service - OpenSSH per-connection server daemon (10.0.0.1:56342). May 8 00:47:05.800940 systemd-logind[1444]: Removed session 25. May 8 00:47:05.863026 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 56342 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:05.864650 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:05.868746 systemd-logind[1444]: New session 26 of user core. May 8 00:47:05.878537 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 00:47:08.145505 sshd[4330]: pam_unix(sshd:session): session closed for user core May 8 00:47:08.156263 systemd[1]: sshd@25-10.0.0.130:22-10.0.0.1:56342.service: Deactivated successfully. May 8 00:47:08.158150 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:47:08.159687 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. May 8 00:47:08.168660 systemd[1]: Started sshd@26-10.0.0.130:22-10.0.0.1:43056.service - OpenSSH per-connection server daemon (10.0.0.1:43056). May 8 00:47:08.169544 systemd-logind[1444]: Removed session 26. 
May 8 00:47:08.201395 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 43056 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:08.203181 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:08.207139 systemd-logind[1444]: New session 27 of user core. May 8 00:47:08.216537 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 00:47:08.266900 sshd[4343]: pam_unix(sshd:session): session closed for user core May 8 00:47:08.278055 systemd[1]: sshd@26-10.0.0.130:22-10.0.0.1:43056.service: Deactivated successfully. May 8 00:47:08.279693 systemd[1]: session-27.scope: Deactivated successfully. May 8 00:47:08.280990 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit. May 8 00:47:08.289762 systemd[1]: Started sshd@27-10.0.0.130:22-10.0.0.1:43066.service - OpenSSH per-connection server daemon (10.0.0.1:43066). May 8 00:47:08.290606 systemd-logind[1444]: Removed session 27. May 8 00:47:08.323529 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 43066 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:47:08.325125 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:47:08.329048 systemd-logind[1444]: New session 28 of user core. May 8 00:47:08.334514 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 8 00:47:08.426652 kubelet[2504]: E0508 00:47:08.426513 2504 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:47:08.549113 kubelet[2504]: I0508 00:47:08.546818 2504 memory_manager.go:355] "RemoveStaleState removing state" podUID="d1a57d54-ed0c-4214-a2db-7de2b0ea2ecf" containerName="cilium-operator" May 8 00:47:08.549113 kubelet[2504]: I0508 00:47:08.546848 2504 memory_manager.go:355] "RemoveStaleState removing state" podUID="c0ce0610-2ac9-45d0-9e0f-b4d60f4f3b8f" containerName="cilium-agent" May 8 00:47:08.558814 systemd[1]: Created slice kubepods-burstable-pod6891069c_efc8_4635_a364_afe6b61605ef.slice - libcontainer container kubepods-burstable-pod6891069c_efc8_4635_a364_afe6b61605ef.slice. May 8 00:47:08.596007 kubelet[2504]: I0508 00:47:08.595963 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-cilium-run\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596007 kubelet[2504]: I0508 00:47:08.596009 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-host-proc-sys-net\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596162 kubelet[2504]: I0508 00:47:08.596034 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6891069c-efc8-4635-a364-afe6b61605ef-hubble-tls\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596162 kubelet[2504]: 
I0508 00:47:08.596055 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-lib-modules\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596162 kubelet[2504]: I0508 00:47:08.596073 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-etc-cni-netd\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596162 kubelet[2504]: I0508 00:47:08.596092 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-host-proc-sys-kernel\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596162 kubelet[2504]: I0508 00:47:08.596136 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-cilium-cgroup\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596297 kubelet[2504]: I0508 00:47:08.596176 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-bpf-maps\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596297 kubelet[2504]: I0508 00:47:08.596192 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6891069c-efc8-4635-a364-afe6b61605ef-clustermesh-secrets\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596297 kubelet[2504]: I0508 00:47:08.596206 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-xtables-lock\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596297 kubelet[2504]: I0508 00:47:08.596221 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5dps\" (UniqueName: \"kubernetes.io/projected/6891069c-efc8-4635-a364-afe6b61605ef-kube-api-access-h5dps\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596297 kubelet[2504]: I0508 00:47:08.596238 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-hostproc\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596297 kubelet[2504]: I0508 00:47:08.596252 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6891069c-efc8-4635-a364-afe6b61605ef-cni-path\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596445 kubelet[2504]: I0508 00:47:08.596266 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6891069c-efc8-4635-a364-afe6b61605ef-cilium-config-path\") pod 
\"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.596445 kubelet[2504]: I0508 00:47:08.596286 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6891069c-efc8-4635-a364-afe6b61605ef-cilium-ipsec-secrets\") pod \"cilium-6ml9d\" (UID: \"6891069c-efc8-4635-a364-afe6b61605ef\") " pod="kube-system/cilium-6ml9d" May 8 00:47:08.861101 kubelet[2504]: E0508 00:47:08.861066 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:08.861678 containerd[1463]: time="2025-05-08T00:47:08.861637951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ml9d,Uid:6891069c-efc8-4635-a364-afe6b61605ef,Namespace:kube-system,Attempt:0,}" May 8 00:47:09.018811 containerd[1463]: time="2025-05-08T00:47:09.018154210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:09.018811 containerd[1463]: time="2025-05-08T00:47:09.018790913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:09.018811 containerd[1463]: time="2025-05-08T00:47:09.018803777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:09.019011 containerd[1463]: time="2025-05-08T00:47:09.018896433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:09.037543 systemd[1]: Started cri-containerd-9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef.scope - libcontainer container 9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef. 
May 8 00:47:09.060927 containerd[1463]: time="2025-05-08T00:47:09.060868193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ml9d,Uid:6891069c-efc8-4635-a364-afe6b61605ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\"" May 8 00:47:09.061770 kubelet[2504]: E0508 00:47:09.061744 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:09.063796 containerd[1463]: time="2025-05-08T00:47:09.063745780Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:47:09.193507 containerd[1463]: time="2025-05-08T00:47:09.193448911Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0007aec56195a2c076985cd29d98e75940caa6cf63055a9be06ea6ded614a99\"" May 8 00:47:09.194179 containerd[1463]: time="2025-05-08T00:47:09.194125989Z" level=info msg="StartContainer for \"c0007aec56195a2c076985cd29d98e75940caa6cf63055a9be06ea6ded614a99\"" May 8 00:47:09.222555 systemd[1]: Started cri-containerd-c0007aec56195a2c076985cd29d98e75940caa6cf63055a9be06ea6ded614a99.scope - libcontainer container c0007aec56195a2c076985cd29d98e75940caa6cf63055a9be06ea6ded614a99. May 8 00:47:09.270431 containerd[1463]: time="2025-05-08T00:47:09.270352964Z" level=info msg="StartContainer for \"c0007aec56195a2c076985cd29d98e75940caa6cf63055a9be06ea6ded614a99\" returns successfully" May 8 00:47:09.278618 systemd[1]: cri-containerd-c0007aec56195a2c076985cd29d98e75940caa6cf63055a9be06ea6ded614a99.scope: Deactivated successfully. 
May 8 00:47:09.340964 containerd[1463]: time="2025-05-08T00:47:09.340888507Z" level=info msg="shim disconnected" id=c0007aec56195a2c076985cd29d98e75940caa6cf63055a9be06ea6ded614a99 namespace=k8s.io May 8 00:47:09.340964 containerd[1463]: time="2025-05-08T00:47:09.340956436Z" level=warning msg="cleaning up after shim disconnected" id=c0007aec56195a2c076985cd29d98e75940caa6cf63055a9be06ea6ded614a99 namespace=k8s.io May 8 00:47:09.340964 containerd[1463]: time="2025-05-08T00:47:09.340968219Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:47:09.708812 kubelet[2504]: E0508 00:47:09.708785 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:09.710710 containerd[1463]: time="2025-05-08T00:47:09.710678731Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:47:10.177832 containerd[1463]: time="2025-05-08T00:47:10.177654188Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086\"" May 8 00:47:10.178852 containerd[1463]: time="2025-05-08T00:47:10.178790120Z" level=info msg="StartContainer for \"9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086\"" May 8 00:47:10.208576 systemd[1]: Started cri-containerd-9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086.scope - libcontainer container 9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086. May 8 00:47:10.242057 systemd[1]: cri-containerd-9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086.scope: Deactivated successfully. 
May 8 00:47:10.310323 containerd[1463]: time="2025-05-08T00:47:10.310255616Z" level=info msg="StartContainer for \"9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086\" returns successfully"
May 8 00:47:10.450693 containerd[1463]: time="2025-05-08T00:47:10.450608159Z" level=info msg="shim disconnected" id=9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086 namespace=k8s.io
May 8 00:47:10.450693 containerd[1463]: time="2025-05-08T00:47:10.450682571Z" level=warning msg="cleaning up after shim disconnected" id=9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086 namespace=k8s.io
May 8 00:47:10.450693 containerd[1463]: time="2025-05-08T00:47:10.450694353Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:10.701932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9481ec6f3f1590bdc26a855d4a698d18ca625330f0e09ab737cf40bd43dfc086-rootfs.mount: Deactivated successfully.
May 8 00:47:10.712284 kubelet[2504]: E0508 00:47:10.712257 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:10.714402 containerd[1463]: time="2025-05-08T00:47:10.714344975Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:47:10.782611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231716006.mount: Deactivated successfully.
May 8 00:47:10.818856 containerd[1463]: time="2025-05-08T00:47:10.818804281Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e\""
May 8 00:47:10.820076 containerd[1463]: time="2025-05-08T00:47:10.820028641Z" level=info msg="StartContainer for \"21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e\""
May 8 00:47:10.854540 systemd[1]: Started cri-containerd-21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e.scope - libcontainer container 21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e.
May 8 00:47:10.946333 systemd[1]: cri-containerd-21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e.scope: Deactivated successfully.
May 8 00:47:10.950788 containerd[1463]: time="2025-05-08T00:47:10.950738870Z" level=info msg="StartContainer for \"21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e\" returns successfully"
May 8 00:47:11.120767 containerd[1463]: time="2025-05-08T00:47:11.120586440Z" level=info msg="shim disconnected" id=21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e namespace=k8s.io
May 8 00:47:11.120767 containerd[1463]: time="2025-05-08T00:47:11.120654300Z" level=warning msg="cleaning up after shim disconnected" id=21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e namespace=k8s.io
May 8 00:47:11.120767 containerd[1463]: time="2025-05-08T00:47:11.120665881Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:11.298699 kubelet[2504]: E0508 00:47:11.298647 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:11.701972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21121b28bb70002f0a932a31156dd7a1c9ac4f43a89f1e1e7100f2172775f82e-rootfs.mount: Deactivated successfully.
May 8 00:47:11.715683 kubelet[2504]: E0508 00:47:11.715660 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:11.717186 containerd[1463]: time="2025-05-08T00:47:11.717152874Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:47:12.342770 containerd[1463]: time="2025-05-08T00:47:12.342686572Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec\""
May 8 00:47:12.343460 containerd[1463]: time="2025-05-08T00:47:12.343363188Z" level=info msg="StartContainer for \"fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec\""
May 8 00:47:12.372534 systemd[1]: Started cri-containerd-fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec.scope - libcontainer container fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec.
May 8 00:47:12.397160 systemd[1]: cri-containerd-fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec.scope: Deactivated successfully.
May 8 00:47:12.530329 containerd[1463]: time="2025-05-08T00:47:12.530271756Z" level=info msg="StartContainer for \"fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec\" returns successfully"
May 8 00:47:12.702153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec-rootfs.mount: Deactivated successfully.
May 8 00:47:12.720013 kubelet[2504]: E0508 00:47:12.719991 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:13.140676 containerd[1463]: time="2025-05-08T00:47:13.140497359Z" level=info msg="shim disconnected" id=fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec namespace=k8s.io
May 8 00:47:13.140676 containerd[1463]: time="2025-05-08T00:47:13.140567543Z" level=warning msg="cleaning up after shim disconnected" id=fa289c4b500e9914c53a862ef4fbc853887d3249579a417cee126cbc6f331dec namespace=k8s.io
May 8 00:47:13.140676 containerd[1463]: time="2025-05-08T00:47:13.140579545Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:13.427871 kubelet[2504]: E0508 00:47:13.427743 2504 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:47:13.724336 kubelet[2504]: E0508 00:47:13.724308 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:13.725946 containerd[1463]: time="2025-05-08T00:47:13.725894411Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:47:13.906575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219243403.mount: Deactivated successfully.
May 8 00:47:13.956061 containerd[1463]: time="2025-05-08T00:47:13.956006168Z" level=info msg="CreateContainer within sandbox \"9d0c968e5a18d46f1a5b04489d17a50cebf2ba5ca8b85f77a7414be46088f8ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"897a9aff3feac128bc4df160b2d4708d1281cfe6e09c9f0f1b3f0c8a7d2bf4a6\""
May 8 00:47:13.956766 containerd[1463]: time="2025-05-08T00:47:13.956605337Z" level=info msg="StartContainer for \"897a9aff3feac128bc4df160b2d4708d1281cfe6e09c9f0f1b3f0c8a7d2bf4a6\""
May 8 00:47:13.991533 systemd[1]: Started cri-containerd-897a9aff3feac128bc4df160b2d4708d1281cfe6e09c9f0f1b3f0c8a7d2bf4a6.scope - libcontainer container 897a9aff3feac128bc4df160b2d4708d1281cfe6e09c9f0f1b3f0c8a7d2bf4a6.
May 8 00:47:14.205291 containerd[1463]: time="2025-05-08T00:47:14.205239697Z" level=info msg="StartContainer for \"897a9aff3feac128bc4df160b2d4708d1281cfe6e09c9f0f1b3f0c8a7d2bf4a6\" returns successfully"
May 8 00:47:14.299749 kubelet[2504]: E0508 00:47:14.299533 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:14.597449 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:47:14.729489 kubelet[2504]: E0508 00:47:14.729444 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:14.762857 kubelet[2504]: I0508 00:47:14.762785 2504 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:47:14Z","lastTransitionTime":"2025-05-08T00:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:47:14.913860 kubelet[2504]: I0508 00:47:14.913795 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6ml9d" podStartSLOduration=6.913775981 podStartE2EDuration="6.913775981s" podCreationTimestamp="2025-05-08 00:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:47:14.913183275 +0000 UTC m=+91.718539324" watchObservedRunningTime="2025-05-08 00:47:14.913775981 +0000 UTC m=+91.719132030"
May 8 00:47:15.731440 kubelet[2504]: E0508 00:47:15.731365 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:17.724943 systemd[1]: run-containerd-runc-k8s.io-897a9aff3feac128bc4df160b2d4708d1281cfe6e09c9f0f1b3f0c8a7d2bf4a6-runc.pDyHA1.mount: Deactivated successfully.
May 8 00:47:17.774185 systemd-networkd[1400]: lxc_health: Link UP
May 8 00:47:17.788791 systemd-networkd[1400]: lxc_health: Gained carrier
May 8 00:47:18.864186 kubelet[2504]: E0508 00:47:18.864017 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:19.053804 systemd-networkd[1400]: lxc_health: Gained IPv6LL
May 8 00:47:19.740068 kubelet[2504]: E0508 00:47:19.740019 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:20.742230 kubelet[2504]: E0508 00:47:20.742196 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:22.069621 kubelet[2504]: E0508 00:47:22.069280 2504 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60916->127.0.0.1:43393: write tcp 127.0.0.1:60916->127.0.0.1:43393: write: broken pipe
May 8 00:47:26.276724 sshd[4351]: pam_unix(sshd:session): session closed for user core
May 8 00:47:26.281286 systemd[1]: sshd@27-10.0.0.130:22-10.0.0.1:43066.service: Deactivated successfully.
May 8 00:47:26.283531 systemd[1]: session-28.scope: Deactivated successfully.
May 8 00:47:26.284223 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit.
May 8 00:47:26.285247 systemd-logind[1444]: Removed session 28.