May 8 00:36:39.882946 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025 May 8 00:36:39.882967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:36:39.882978 kernel: BIOS-provided physical RAM map: May 8 00:36:39.882984 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 8 00:36:39.882990 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 8 00:36:39.882996 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 8 00:36:39.883004 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 8 00:36:39.883010 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 8 00:36:39.883016 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 8 00:36:39.883022 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 8 00:36:39.883030 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 8 00:36:39.883037 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved May 8 00:36:39.883043 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 May 8 00:36:39.883049 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved May 8 00:36:39.883057 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 8 00:36:39.883064 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 8 00:36:39.883075 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 8 00:36:39.883082 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 8 00:36:39.883089 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 8 00:36:39.883097 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:36:39.883105 kernel: NX (Execute Disable) protection: active May 8 00:36:39.883112 kernel: APIC: Static calls initialized May 8 00:36:39.883120 kernel: efi: EFI v2.7 by EDK II May 8 00:36:39.883127 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 May 8 00:36:39.883134 kernel: SMBIOS 2.8 present. May 8 00:36:39.883141 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 8 00:36:39.883147 kernel: Hypervisor detected: KVM May 8 00:36:39.883156 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:36:39.883163 kernel: kvm-clock: using sched offset of 4886239688 cycles May 8 00:36:39.883170 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:36:39.883177 kernel: tsc: Detected 2794.748 MHz processor May 8 00:36:39.883184 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:36:39.883191 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:36:39.883198 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 8 00:36:39.883205 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 8 00:36:39.883212 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:36:39.883220 kernel: Using GB pages for direct mapping May 8 00:36:39.883227 kernel: Secure boot disabled May 8 00:36:39.883234 kernel: ACPI: Early table checksum verification disabled May 8 00:36:39.883241 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 8 00:36:39.883252 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 8 00:36:39.883259 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) May 8 00:36:39.883266 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:36:39.883275 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 8 00:36:39.883283 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:36:39.883290 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:36:39.883297 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:36:39.883304 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:36:39.883311 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 8 00:36:39.883318 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 8 00:36:39.883328 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 8 00:36:39.883335 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 8 00:36:39.883342 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 8 00:36:39.883349 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 8 00:36:39.883356 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 8 00:36:39.883363 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 8 00:36:39.883370 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 8 00:36:39.883377 kernel: No NUMA configuration found May 8 00:36:39.883384 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 8 00:36:39.883394 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 8 00:36:39.883401 kernel: Zone ranges: May 8 00:36:39.883408 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:36:39.883415 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 8 00:36:39.883422 kernel: Normal empty May 8 00:36:39.883429 
kernel: Movable zone start for each node May 8 00:36:39.883436 kernel: Early memory node ranges May 8 00:36:39.883443 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 8 00:36:39.883450 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 8 00:36:39.883457 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 8 00:36:39.883467 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 8 00:36:39.883474 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 8 00:36:39.883481 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 8 00:36:39.883488 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 8 00:36:39.883495 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:36:39.883502 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 8 00:36:39.883510 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 8 00:36:39.883517 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:36:39.883524 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 8 00:36:39.883533 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 8 00:36:39.883540 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 8 00:36:39.883547 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:36:39.883554 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:36:39.883561 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:36:39.883568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:36:39.883575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:36:39.883583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:36:39.883590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:36:39.883599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:36:39.883606 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information May 8 00:36:39.883613 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:36:39.883620 kernel: TSC deadline timer available May 8 00:36:39.883627 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 8 00:36:39.883634 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 8 00:36:39.883641 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:36:39.883648 kernel: kvm-guest: setup PV sched yield May 8 00:36:39.883655 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 8 00:36:39.883662 kernel: Booting paravirtualized kernel on KVM May 8 00:36:39.883808 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:36:39.883822 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 8 00:36:39.883829 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 May 8 00:36:39.883836 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 May 8 00:36:39.883843 kernel: pcpu-alloc: [0] 0 1 2 3 May 8 00:36:39.883851 kernel: kvm-guest: PV spinlocks enabled May 8 00:36:39.883858 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:36:39.883866 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:36:39.883877 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 8 00:36:39.883884 kernel: random: crng init done May 8 00:36:39.883891 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:36:39.883899 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:36:39.883906 kernel: Fallback order for Node 0: 0 May 8 00:36:39.883913 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 May 8 00:36:39.883920 kernel: Policy zone: DMA32 May 8 00:36:39.883927 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:36:39.883934 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 166140K reserved, 0K cma-reserved) May 8 00:36:39.883944 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:36:39.883951 kernel: ftrace: allocating 37944 entries in 149 pages May 8 00:36:39.883958 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:36:39.883965 kernel: Dynamic Preempt: voluntary May 8 00:36:39.883980 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:36:39.883990 kernel: rcu: RCU event tracing is enabled. May 8 00:36:39.883998 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:36:39.884006 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:36:39.884013 kernel: Rude variant of Tasks RCU enabled. May 8 00:36:39.884020 kernel: Tracing variant of Tasks RCU enabled. May 8 00:36:39.884028 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:36:39.884035 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:36:39.884045 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 8 00:36:39.884053 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 8 00:36:39.884060 kernel: Console: colour dummy device 80x25 May 8 00:36:39.884067 kernel: printk: console [ttyS0] enabled May 8 00:36:39.884075 kernel: ACPI: Core revision 20230628 May 8 00:36:39.884085 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:36:39.884092 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:36:39.884100 kernel: x2apic enabled May 8 00:36:39.884107 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:36:39.884114 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 8 00:36:39.884122 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 8 00:36:39.884129 kernel: kvm-guest: setup PV IPIs May 8 00:36:39.884137 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:36:39.884144 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:36:39.884154 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 8 00:36:39.884162 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:36:39.884169 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:36:39.884176 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:36:39.884184 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:36:39.884191 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:36:39.884199 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:36:39.884206 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:36:39.884214 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 8 00:36:39.884224 kernel: RETBleed: Mitigation: untrained return thunk May 8 00:36:39.884231 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:36:39.884239 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:36:39.884246 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 8 00:36:39.884254 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 8 00:36:39.884262 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 8 00:36:39.884269 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:36:39.884277 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:36:39.884287 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:36:39.884294 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:36:39.884301 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
May 8 00:36:39.884309 kernel: Freeing SMP alternatives memory: 32K May 8 00:36:39.884316 kernel: pid_max: default: 32768 minimum: 301 May 8 00:36:39.884324 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:36:39.884331 kernel: landlock: Up and running. May 8 00:36:39.884338 kernel: SELinux: Initializing. May 8 00:36:39.884346 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:36:39.884356 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:36:39.884363 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 8 00:36:39.884371 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:36:39.884379 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:36:39.884386 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:36:39.884394 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:36:39.884401 kernel: ... version: 0 May 8 00:36:39.884408 kernel: ... bit width: 48 May 8 00:36:39.884416 kernel: ... generic registers: 6 May 8 00:36:39.884426 kernel: ... value mask: 0000ffffffffffff May 8 00:36:39.884433 kernel: ... max period: 00007fffffffffff May 8 00:36:39.884440 kernel: ... fixed-purpose events: 0 May 8 00:36:39.884448 kernel: ... event mask: 000000000000003f May 8 00:36:39.884455 kernel: signal: max sigframe size: 1776 May 8 00:36:39.884462 kernel: rcu: Hierarchical SRCU implementation. May 8 00:36:39.884470 kernel: rcu: Max phase no-delay instances is 400. May 8 00:36:39.884477 kernel: smp: Bringing up secondary CPUs ... May 8 00:36:39.884485 kernel: smpboot: x86: Booting SMP configuration: May 8 00:36:39.884494 kernel: .... 
node #0, CPUs: #1 #2 #3 May 8 00:36:39.884502 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:36:39.884509 kernel: smpboot: Max logical packages: 1 May 8 00:36:39.884517 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 8 00:36:39.884524 kernel: devtmpfs: initialized May 8 00:36:39.884531 kernel: x86/mm: Memory block size: 128MB May 8 00:36:39.884539 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 8 00:36:39.884546 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 8 00:36:39.884554 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 8 00:36:39.884563 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 8 00:36:39.884571 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 8 00:36:39.884579 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:36:39.884586 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:36:39.884593 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:36:39.884601 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:36:39.884608 kernel: audit: initializing netlink subsys (disabled) May 8 00:36:39.884616 kernel: audit: type=2000 audit(1746664599.918:1): state=initialized audit_enabled=0 res=1 May 8 00:36:39.884623 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:36:39.884633 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:36:39.884640 kernel: cpuidle: using governor menu May 8 00:36:39.884647 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:36:39.884655 kernel: dca service started, version 1.12.1 May 8 00:36:39.884662 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 8 00:36:39.884680 kernel: 
PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 8 00:36:39.884688 kernel: PCI: Using configuration type 1 for base access May 8 00:36:39.884695 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 8 00:36:39.884703 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:36:39.884713 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:36:39.884720 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:36:39.884728 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:36:39.884735 kernel: ACPI: Added _OSI(Module Device) May 8 00:36:39.884742 kernel: ACPI: Added _OSI(Processor Device) May 8 00:36:39.884750 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:36:39.884757 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:36:39.884765 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:36:39.884772 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:36:39.884782 kernel: ACPI: Interpreter enabled May 8 00:36:39.884789 kernel: ACPI: PM: (supports S0 S3 S5) May 8 00:36:39.884796 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:36:39.884804 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:36:39.884811 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:36:39.884825 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:36:39.884833 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:36:39.885017 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:36:39.885148 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:36:39.885268 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 8 00:36:39.885278 kernel: PCI host bridge to bus 0000:00 May 8 00:36:39.885404 
kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:36:39.885517 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 00:36:39.885627 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:36:39.885757 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 8 00:36:39.885885 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:36:39.885998 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 8 00:36:39.886110 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:36:39.886251 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:36:39.886384 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:36:39.886506 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 8 00:36:39.886630 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 8 00:36:39.886774 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 8 00:36:39.886907 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 8 00:36:39.887030 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:36:39.887160 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:36:39.887281 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 8 00:36:39.887402 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 8 00:36:39.887530 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 8 00:36:39.887660 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 8 00:36:39.887802 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 8 00:36:39.887937 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 8 00:36:39.888058 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 8 00:36:39.888195 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 May 8 00:36:39.888322 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 8 00:36:39.888444 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 8 00:36:39.888565 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 8 00:36:39.888702 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 8 00:36:39.888841 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:36:39.888965 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:36:39.889094 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:36:39.889219 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 8 00:36:39.889339 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 8 00:36:39.889467 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:36:39.889587 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 8 00:36:39.889597 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:36:39.889605 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:36:39.889613 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:36:39.889621 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:36:39.889632 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:36:39.889640 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:36:39.889647 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:36:39.889655 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:36:39.889663 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:36:39.889684 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:36:39.889692 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:36:39.889699 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 00:36:39.889707 
kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:36:39.889717 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:36:39.889725 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:36:39.889733 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:36:39.889740 kernel: iommu: Default domain type: Translated May 8 00:36:39.889748 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:36:39.889755 kernel: efivars: Registered efivars operations May 8 00:36:39.889763 kernel: PCI: Using ACPI for IRQ routing May 8 00:36:39.889771 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:36:39.889778 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 8 00:36:39.889788 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 8 00:36:39.889796 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 8 00:36:39.889803 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 8 00:36:39.889936 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:36:39.890058 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:36:39.890178 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:36:39.890188 kernel: vgaarb: loaded May 8 00:36:39.890196 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 8 00:36:39.890203 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:36:39.890214 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:36:39.890222 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:36:39.890230 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:36:39.890238 kernel: pnp: PnP ACPI init May 8 00:36:39.890367 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 8 00:36:39.890379 kernel: pnp: PnP ACPI: found 6 devices May 8 00:36:39.890387 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, 
max_idle_ns: 2085701024 ns May 8 00:36:39.890394 kernel: NET: Registered PF_INET protocol family May 8 00:36:39.890406 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:36:39.890414 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:36:39.890421 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:36:39.890429 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:36:39.890437 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 00:36:39.890445 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:36:39.890452 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:36:39.890460 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:36:39.890470 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:36:39.890478 kernel: NET: Registered PF_XDP protocol family May 8 00:36:39.890600 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 8 00:36:39.890736 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 8 00:36:39.890860 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:36:39.890972 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:36:39.891083 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:36:39.891193 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 8 00:36:39.891308 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 8 00:36:39.891419 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 8 00:36:39.891429 kernel: PCI: CLS 0 bytes, default 64 May 8 00:36:39.891437 kernel: Initialise system trusted keyrings May 8 00:36:39.891444 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 May 8 00:36:39.891452 kernel: Key type asymmetric registered May 8 00:36:39.891460 kernel: Asymmetric key parser 'x509' registered May 8 00:36:39.891467 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 00:36:39.891475 kernel: io scheduler mq-deadline registered May 8 00:36:39.891485 kernel: io scheduler kyber registered May 8 00:36:39.891493 kernel: io scheduler bfq registered May 8 00:36:39.891500 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:36:39.891508 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:36:39.891516 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:36:39.891524 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 8 00:36:39.891532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:36:39.891539 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:36:39.891547 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:36:39.891557 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:36:39.891565 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:36:39.891754 kernel: rtc_cmos 00:04: RTC can wake from S4 May 8 00:36:39.891881 kernel: rtc_cmos 00:04: registered as rtc0 May 8 00:36:39.891892 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 May 8 00:36:39.892002 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:36:39 UTC (1746664599) May 8 00:36:39.892118 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:36:39.892128 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 8 00:36:39.892140 kernel: efifb: probing for efifb May 8 00:36:39.892148 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k May 8 00:36:39.892156 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 May 8 00:36:39.892163 kernel: efifb: scrolling: redraw May 8 
00:36:39.892171 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 May 8 00:36:39.892179 kernel: Console: switching to colour frame buffer device 100x37 May 8 00:36:39.892205 kernel: fb0: EFI VGA frame buffer device May 8 00:36:39.892215 kernel: pstore: Using crash dump compression: deflate May 8 00:36:39.892223 kernel: pstore: Registered efi_pstore as persistent store backend May 8 00:36:39.892233 kernel: NET: Registered PF_INET6 protocol family May 8 00:36:39.892241 kernel: Segment Routing with IPv6 May 8 00:36:39.892248 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:36:39.892256 kernel: NET: Registered PF_PACKET protocol family May 8 00:36:39.892264 kernel: Key type dns_resolver registered May 8 00:36:39.892271 kernel: IPI shorthand broadcast: enabled May 8 00:36:39.892280 kernel: sched_clock: Marking stable (752003277, 116342179)->(885720328, -17374872) May 8 00:36:39.892288 kernel: registered taskstats version 1 May 8 00:36:39.892296 kernel: Loading compiled-in X.509 certificates May 8 00:36:39.892306 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e' May 8 00:36:39.892314 kernel: Key type .fscrypt registered May 8 00:36:39.892324 kernel: Key type fscrypt-provisioning registered May 8 00:36:39.892332 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:36:39.892339 kernel: ima: Allocated hash algorithm: sha1 May 8 00:36:39.892347 kernel: ima: No architecture policies found May 8 00:36:39.892355 kernel: clk: Disabling unused clocks May 8 00:36:39.892363 kernel: Freeing unused kernel image (initmem) memory: 42856K May 8 00:36:39.892371 kernel: Write protecting the kernel read-only data: 36864k May 8 00:36:39.892381 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 8 00:36:39.892389 kernel: Run /init as init process May 8 00:36:39.892397 kernel: with arguments: May 8 00:36:39.892405 kernel: /init May 8 00:36:39.892413 kernel: with environment: May 8 00:36:39.892420 kernel: HOME=/ May 8 00:36:39.892428 kernel: TERM=linux May 8 00:36:39.892436 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:36:39.892446 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:36:39.892459 systemd[1]: Detected virtualization kvm. May 8 00:36:39.892467 systemd[1]: Detected architecture x86-64. May 8 00:36:39.892476 systemd[1]: Running in initrd. May 8 00:36:39.892486 systemd[1]: No hostname configured, using default hostname. May 8 00:36:39.892497 systemd[1]: Hostname set to . May 8 00:36:39.892506 systemd[1]: Initializing machine ID from VM UUID. May 8 00:36:39.892514 systemd[1]: Queued start job for default target initrd.target. May 8 00:36:39.892522 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:36:39.892531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:36:39.892540 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 8 00:36:39.892549 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:36:39.892557 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 00:36:39.892568 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 00:36:39.892578 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 00:36:39.892587 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 00:36:39.892596 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:36:39.892604 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:36:39.892613 systemd[1]: Reached target paths.target - Path Units.
May 8 00:36:39.892621 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:36:39.892632 systemd[1]: Reached target swap.target - Swaps.
May 8 00:36:39.892640 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:36:39.892648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:36:39.892657 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:36:39.892665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 00:36:39.892687 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 00:36:39.892695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:36:39.892704 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:36:39.892715 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:36:39.892723 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:36:39.892732 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 00:36:39.892740 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:36:39.892748 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 00:36:39.892757 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:36:39.892765 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:36:39.892774 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:36:39.892782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:36:39.892793 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 00:36:39.892801 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:36:39.892810 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:36:39.892826 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 00:36:39.892858 systemd-journald[192]: Collecting audit messages is disabled.
May 8 00:36:39.892878 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 00:36:39.892887 systemd-journald[192]: Journal started
May 8 00:36:39.892908 systemd-journald[192]: Runtime Journal (/run/log/journal/26060271184d4dcda3a45fc77ec6613c) is 6.0M, max 48.3M, 42.2M free.
May 8 00:36:39.895296 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:36:39.897789 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:36:39.898553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:36:39.900974 systemd-modules-load[193]: Inserted module 'overlay'
May 8 00:36:39.903107 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:36:39.905900 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:36:39.907761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:36:39.920563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:36:39.924653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:36:39.929897 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 00:36:39.942169 dracut-cmdline[221]: dracut-dracut-053
May 8 00:36:39.943246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:36:39.945150 systemd-modules-load[193]: Inserted module 'br_netfilter'
May 8 00:36:39.946095 kernel: Bridge firewalling registered
May 8 00:36:39.946115 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 00:36:39.952191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:36:39.962803 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:36:39.973600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:36:39.983070 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:36:40.014417 systemd-resolved[268]: Positive Trust Anchors:
May 8 00:36:40.014437 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:36:40.014468 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:36:40.020565 systemd-resolved[268]: Defaulting to hostname 'linux'.
May 8 00:36:40.021919 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:36:40.024638 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:36:40.040701 kernel: SCSI subsystem initialized
May 8 00:36:40.050699 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:36:40.060698 kernel: iscsi: registered transport (tcp)
May 8 00:36:40.085708 kernel: iscsi: registered transport (qla4xxx)
May 8 00:36:40.085739 kernel: QLogic iSCSI HBA Driver
May 8 00:36:40.139595 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 00:36:40.150793 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 00:36:40.176979 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:36:40.177033 kernel: device-mapper: uevent: version 1.0.3
May 8 00:36:40.178083 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 00:36:40.221706 kernel: raid6: avx2x4 gen() 29808 MB/s
May 8 00:36:40.238709 kernel: raid6: avx2x2 gen() 29932 MB/s
May 8 00:36:40.255799 kernel: raid6: avx2x1 gen() 25598 MB/s
May 8 00:36:40.255831 kernel: raid6: using algorithm avx2x2 gen() 29932 MB/s
May 8 00:36:40.273823 kernel: raid6: .... xor() 19573 MB/s, rmw enabled
May 8 00:36:40.273855 kernel: raid6: using avx2x2 recovery algorithm
May 8 00:36:40.294695 kernel: xor: automatically using best checksumming function avx
May 8 00:36:40.448704 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 00:36:40.460754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:36:40.474813 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:36:40.487968 systemd-udevd[412]: Using default interface naming scheme 'v255'.
May 8 00:36:40.492638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:36:40.503820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 00:36:40.516757 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
May 8 00:36:40.549565 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:36:40.565823 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:36:40.634907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:36:40.645268 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 00:36:40.658613 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 00:36:40.662857 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:36:40.666088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:36:40.668725 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:36:40.681701 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 8 00:36:40.719499 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:36:40.719521 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:36:40.719681 kernel: libata version 3.00 loaded.
May 8 00:36:40.719699 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:36:40.719722 kernel: GPT:9289727 != 19775487
May 8 00:36:40.719733 kernel: AVX2 version of gcm_enc/dec engaged.
May 8 00:36:40.719744 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:36:40.719754 kernel: GPT:9289727 != 19775487
May 8 00:36:40.719763 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:36:40.719773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:36:40.719783 kernel: AES CTR mode by8 optimization enabled
May 8 00:36:40.682264 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 00:36:40.694698 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:36:40.717546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:36:40.717716 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:36:40.720928 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:36:40.728987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:36:40.730308 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:36:40.739891 kernel: ahci 0000:00:1f.2: version 3.0
May 8 00:36:40.758349 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 8 00:36:40.758366 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 8 00:36:40.758530 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456)
May 8 00:36:40.758542 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 8 00:36:40.758710 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (472)
May 8 00:36:40.758722 kernel: scsi host0: ahci
May 8 00:36:40.758890 kernel: scsi host1: ahci
May 8 00:36:40.759048 kernel: scsi host2: ahci
May 8 00:36:40.759186 kernel: scsi host3: ahci
May 8 00:36:40.759324 kernel: scsi host4: ahci
May 8 00:36:40.759465 kernel: scsi host5: ahci
May 8 00:36:40.759605 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 8 00:36:40.759616 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 8 00:36:40.759630 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 8 00:36:40.759641 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 8 00:36:40.759651 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 8 00:36:40.759661 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 8 00:36:40.731915 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:36:40.749061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:36:40.776205 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 00:36:40.783110 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 00:36:40.788951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:36:40.793918 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 00:36:40.795158 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 00:36:40.809802 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 00:36:40.812136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:36:40.812191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:36:40.813500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:36:40.816411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:36:40.833881 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:36:40.845894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 00:36:40.862612 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:36:40.938618 disk-uuid[553]: Primary Header is updated.
May 8 00:36:40.938618 disk-uuid[553]: Secondary Entries is updated.
May 8 00:36:40.938618 disk-uuid[553]: Secondary Header is updated.
May 8 00:36:40.942032 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:36:40.946689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:36:41.065699 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 8 00:36:41.065757 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 8 00:36:41.072711 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 8 00:36:41.072816 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 8 00:36:41.072847 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 8 00:36:41.073708 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 8 00:36:41.074714 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 8 00:36:41.075704 kernel: ata3.00: applying bridge limits
May 8 00:36:41.075725 kernel: ata3.00: configured for UDMA/100
May 8 00:36:41.076707 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 8 00:36:41.138721 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 8 00:36:41.160412 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 8 00:36:41.160441 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 8 00:36:41.967852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:36:41.967902 disk-uuid[568]: The operation has completed successfully.
May 8 00:36:41.998190 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:36:41.998311 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 00:36:42.072915 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 00:36:42.076284 sh[595]: Success
May 8 00:36:42.103701 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 8 00:36:42.135747 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 00:36:42.147111 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 00:36:42.150566 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 00:36:42.165126 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972
May 8 00:36:42.165164 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 00:36:42.165176 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 00:36:42.166216 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 00:36:42.167043 kernel: BTRFS info (device dm-0): using free space tree
May 8 00:36:42.172148 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 00:36:42.174848 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 00:36:42.184954 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 00:36:42.187951 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 00:36:42.198559 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:36:42.198589 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:36:42.198600 kernel: BTRFS info (device vda6): using free space tree
May 8 00:36:42.202699 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:36:42.211403 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:36:42.212932 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:36:42.292236 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:36:42.323994 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:36:42.334975 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 00:36:42.342959 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 00:36:42.350325 systemd-networkd[773]: lo: Link UP
May 8 00:36:42.350337 systemd-networkd[773]: lo: Gained carrier
May 8 00:36:42.351985 systemd-networkd[773]: Enumeration completed
May 8 00:36:42.352084 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:36:42.352377 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:36:42.352381 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:36:42.353970 systemd-networkd[773]: eth0: Link UP
May 8 00:36:42.353975 systemd-networkd[773]: eth0: Gained carrier
May 8 00:36:42.353983 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:36:42.354808 systemd[1]: Reached target network.target - Network.
May 8 00:36:42.416193 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:36:42.468248 ignition[777]: Ignition 2.19.0
May 8 00:36:42.468260 ignition[777]: Stage: fetch-offline
May 8 00:36:42.468308 ignition[777]: no configs at "/usr/lib/ignition/base.d"
May 8 00:36:42.468319 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:36:42.468429 ignition[777]: parsed url from cmdline: ""
May 8 00:36:42.468434 ignition[777]: no config URL provided
May 8 00:36:42.468441 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:36:42.468454 ignition[777]: no config at "/usr/lib/ignition/user.ign"
May 8 00:36:42.468494 ignition[777]: op(1): [started] loading QEMU firmware config module
May 8 00:36:42.468505 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:36:42.481460 ignition[777]: op(1): [finished] loading QEMU firmware config module
May 8 00:36:42.521246 ignition[777]: parsing config with SHA512: 3487406669c2849d328e222defc97c5a4bd17d0475b2b9ff9d92afec490061f29316488704dcf775f43f3e2e97210a56af65412a5202f972766d5c0114cda553
May 8 00:36:42.526347 unknown[777]: fetched base config from "system"
May 8 00:36:42.526649 unknown[777]: fetched user config from "qemu"
May 8 00:36:42.527118 ignition[777]: fetch-offline: fetch-offline passed
May 8 00:36:42.527196 ignition[777]: Ignition finished successfully
May 8 00:36:42.532517 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:36:42.532776 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:36:42.538948 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 00:36:42.553175 ignition[789]: Ignition 2.19.0
May 8 00:36:42.553186 ignition[789]: Stage: kargs
May 8 00:36:42.553372 ignition[789]: no configs at "/usr/lib/ignition/base.d"
May 8 00:36:42.553384 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:36:42.554298 ignition[789]: kargs: kargs passed
May 8 00:36:42.554346 ignition[789]: Ignition finished successfully
May 8 00:36:42.561408 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 00:36:42.570933 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 00:36:42.586586 ignition[797]: Ignition 2.19.0
May 8 00:36:42.586598 ignition[797]: Stage: disks
May 8 00:36:42.586812 ignition[797]: no configs at "/usr/lib/ignition/base.d"
May 8 00:36:42.586824 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:36:42.587634 ignition[797]: disks: disks passed
May 8 00:36:42.590260 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 00:36:42.587696 ignition[797]: Ignition finished successfully
May 8 00:36:42.592213 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 00:36:42.594107 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 00:36:42.594175 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:36:42.594526 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:36:42.594881 systemd[1]: Reached target basic.target - Basic System.
May 8 00:36:42.608850 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 00:36:42.622311 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 00:36:42.732977 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 00:36:42.766776 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 00:36:42.870775 kernel: EXT4-fs (vda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none.
May 8 00:36:42.871372 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 00:36:42.872019 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 00:36:42.879754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:36:42.881428 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 00:36:42.882838 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 00:36:42.889331 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
May 8 00:36:42.889356 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:36:42.889371 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:36:42.882892 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:36:42.895049 kernel: BTRFS info (device vda6): using free space tree
May 8 00:36:42.895067 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:36:42.882919 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:36:42.890861 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 00:36:42.896117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:36:42.912889 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 00:36:42.943618 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:36:42.948519 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
May 8 00:36:42.953260 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:36:42.957488 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:36:43.039167 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 00:36:43.046772 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 00:36:43.071580 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:36:43.047468 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 00:36:43.098497 ignition[931]: INFO : Ignition 2.19.0
May 8 00:36:43.098497 ignition[931]: INFO : Stage: mount
May 8 00:36:43.101102 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:36:43.101102 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:36:43.101102 ignition[931]: INFO : mount: mount passed
May 8 00:36:43.101102 ignition[931]: INFO : Ignition finished successfully
May 8 00:36:43.101530 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 00:36:43.109795 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 00:36:43.113502 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 00:36:43.164175 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 00:36:43.173983 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 00:36:43.181687 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944)
May 8 00:36:43.181722 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 00:36:43.181747 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 00:36:43.183290 kernel: BTRFS info (device vda6): using free space tree
May 8 00:36:43.187772 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 00:36:43.189220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 00:36:43.218052 ignition[961]: INFO : Ignition 2.19.0
May 8 00:36:43.219264 ignition[961]: INFO : Stage: files
May 8 00:36:43.220045 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:36:43.220045 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:36:43.251540 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:36:43.251540 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:36:43.251540 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:36:43.255822 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:36:43.255822 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:36:43.255822 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:36:43.255822 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:36:43.255822 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 00:36:43.252696 unknown[961]: wrote ssh authorized keys file for user: core
May 8 00:36:43.338992 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:36:43.464059 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 00:36:43.464059 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:36:43.468124 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 8 00:36:43.959331 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:36:44.062501 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:36:44.064641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 00:36:44.224917 systemd-networkd[773]: eth0: Gained IPv6LL
May 8 00:36:44.494396 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:36:44.822124 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 00:36:44.822124 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 00:36:44.825830 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:36:44.828033 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:36:44.829863 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 00:36:44.829863 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 8 00:36:44.832308 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:36:44.834228 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:36:44.834228 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 8 00:36:44.837325 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:36:44.860202 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:36:44.878423 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:36:44.880085 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:36:44.880085 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:36:44.880085 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:36:44.880085 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:36:44.880085 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:36:44.880085 ignition[961]: INFO : files: files passed
May 8 00:36:44.880085 ignition[961]: INFO : Ignition finished successfully
May 8 00:36:44.893553 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 00:36:44.901921 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 00:36:44.903902 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 00:36:44.906462 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:36:44.906576 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 00:36:44.914842 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 00:36:44.921011 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:36:44.921011 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:36:44.920618 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:36:44.945869 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:36:44.922877 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 00:36:44.925654 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 00:36:44.968140 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:36:44.968262 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 00:36:44.974973 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 00:36:44.977155 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 00:36:44.979262 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 00:36:44.980001 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 00:36:44.996955 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:36:45.014883 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 00:36:45.033975 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 00:36:45.035292 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:36:45.037637 systemd[1]: Stopped target timers.target - Timer Units.
May 8 00:36:45.039758 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:36:45.039919 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 00:36:45.044172 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 00:36:45.045788 systemd[1]: Stopped target basic.target - Basic System.
May 8 00:36:45.047867 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 00:36:45.049949 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 00:36:45.074766 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 00:36:45.076922 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 00:36:45.079097 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 00:36:45.081410 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 00:36:45.083544 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 00:36:45.104237 systemd[1]: Stopped target swap.target - Swaps.
May 8 00:36:45.106048 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:36:45.106187 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 00:36:45.108578 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 00:36:45.110061 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:36:45.112226 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 00:36:45.112352 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:36:45.114513 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 00:36:45.114619 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 00:36:45.116888 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 00:36:45.116996 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 00:36:45.119041 systemd[1]: Stopped target paths.target - Path Units.
May 8 00:36:45.124605 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 00:36:45.125771 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:36:45.127325 systemd[1]: Stopped target slices.target - Slice Units.
May 8 00:36:45.129202 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 00:36:45.131202 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 00:36:45.131302 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 00:36:45.133239 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 00:36:45.133330 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 00:36:45.152821 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 00:36:45.152942 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 00:36:45.154948 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 00:36:45.155057 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 00:36:45.174887 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 00:36:45.184157 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 00:36:45.184290 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:36:45.187525 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 00:36:45.190524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 00:36:45.190759 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:36:45.194930 ignition[1015]: INFO : Ignition 2.19.0
May 8 00:36:45.194930 ignition[1015]: INFO : Stage: umount
May 8 00:36:45.194930 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:36:45.194930 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:36:45.194930 ignition[1015]: INFO : umount: umount passed
May 8 00:36:45.194930 ignition[1015]: INFO : Ignition finished successfully
May 8 00:36:45.195054 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 00:36:45.195191 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 00:36:45.205211 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 00:36:45.206234 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 00:36:45.211070 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 00:36:45.211245 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 00:36:45.214831 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 00:36:45.215865 systemd[1]: Stopped target network.target - Network.
May 8 00:36:45.217776 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 00:36:45.217866 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 00:36:45.219871 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 00:36:45.219934 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 00:36:45.221933 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 00:36:45.221994 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 00:36:45.223985 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 00:36:45.224045 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 00:36:45.226486 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 00:36:45.228376 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 00:36:45.232735 systemd-networkd[773]: eth0: DHCPv6 lease lost
May 8 00:36:45.235627 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 00:36:45.235892 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 00:36:45.238409 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 00:36:45.238557 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 00:36:45.242288 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 00:36:45.242379 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:36:45.250941 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 00:36:45.251072 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 00:36:45.251152 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 00:36:45.251561 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:36:45.251608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 00:36:45.252471 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 00:36:45.252525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 00:36:45.252990 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 00:36:45.253038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:36:45.253707 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:36:45.266433 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 00:36:45.266611 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 00:36:45.275727 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 00:36:45.275982 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:36:45.278661 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 00:36:45.278759 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 00:36:45.281150 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 00:36:45.281212 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:36:45.283698 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 00:36:45.283768 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 00:36:45.286267 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 00:36:45.286320 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 00:36:45.288342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 00:36:45.288402 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 00:36:45.304920 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 00:36:45.307265 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 00:36:45.307350 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:36:45.309657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:36:45.309741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:36:45.315471 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 00:36:45.315645 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 00:36:45.478253 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 00:36:45.478433 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 00:36:45.480271 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 00:36:45.481281 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 00:36:45.481375 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 00:36:45.488976 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 00:36:45.496903 systemd[1]: Switching root.
May 8 00:36:45.533228 systemd-journald[192]: Journal stopped
May 8 00:36:46.835993 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
May 8 00:36:46.836066 kernel: SELinux: policy capability network_peer_controls=1
May 8 00:36:46.836080 kernel: SELinux: policy capability open_perms=1
May 8 00:36:46.836092 kernel: SELinux: policy capability extended_socket_class=1
May 8 00:36:46.836103 kernel: SELinux: policy capability always_check_network=0
May 8 00:36:46.836120 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 00:36:46.836134 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 00:36:46.836147 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 00:36:46.836163 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 00:36:46.836178 kernel: audit: type=1403 audit(1746664606.062:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 00:36:46.836191 systemd[1]: Successfully loaded SELinux policy in 42.307ms.
May 8 00:36:46.836218 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.928ms.
May 8 00:36:46.836231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 00:36:46.836251 systemd[1]: Detected virtualization kvm.
May 8 00:36:46.836275 systemd[1]: Detected architecture x86-64.
May 8 00:36:46.836296 systemd[1]: Detected first boot.
May 8 00:36:46.836320 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:36:46.838426 zram_generator::config[1059]: No configuration found.
May 8 00:36:46.838444 systemd[1]: Populated /etc with preset unit settings.
May 8 00:36:46.838456 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 00:36:46.838468 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 00:36:46.838481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 00:36:46.838500 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 00:36:46.838518 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 00:36:46.838530 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 00:36:46.838545 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 00:36:46.838557 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 00:36:46.838569 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 00:36:46.838581 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 00:36:46.838594 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 00:36:46.838606 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 00:36:46.838618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 00:36:46.838630 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 00:36:46.838642 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 00:36:46.838665 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 00:36:46.838692 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 00:36:46.838704 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 00:36:46.838716 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 00:36:46.838728 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 00:36:46.838740 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 00:36:46.838752 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 00:36:46.838764 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 00:36:46.838779 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 00:36:46.838790 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 00:36:46.838802 systemd[1]: Reached target slices.target - Slice Units.
May 8 00:36:46.838814 systemd[1]: Reached target swap.target - Swaps.
May 8 00:36:46.838825 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 00:36:46.838837 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 00:36:46.838849 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 00:36:46.838860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 00:36:46.838874 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 00:36:46.838888 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 00:36:46.838900 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 00:36:46.838912 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 00:36:46.838923 systemd[1]: Mounting media.mount - External Media Directory...
May 8 00:36:46.838936 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:36:46.838951 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 00:36:46.838967 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 00:36:46.838981 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 00:36:46.838994 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 00:36:46.839009 systemd[1]: Reached target machines.target - Containers.
May 8 00:36:46.839021 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 00:36:46.839032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:36:46.839044 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 00:36:46.839056 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 00:36:46.839068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:36:46.839079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:36:46.839091 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:36:46.839105 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 00:36:46.839117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:36:46.839130 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 00:36:46.839143 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 00:36:46.839171 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 00:36:46.839186 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 00:36:46.839199 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 00:36:46.839211 kernel: fuse: init (API version 7.39)
May 8 00:36:46.839224 kernel: loop: module loaded
May 8 00:36:46.839241 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 00:36:46.839253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 00:36:46.839266 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 00:36:46.839278 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 00:36:46.839312 systemd-journald[1133]: Collecting audit messages is disabled.
May 8 00:36:46.839334 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 00:36:46.839346 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 00:36:46.839357 kernel: ACPI: bus type drm_connector registered
May 8 00:36:46.839373 systemd[1]: Stopped verity-setup.service.
May 8 00:36:46.839385 systemd-journald[1133]: Journal started
May 8 00:36:46.839406 systemd-journald[1133]: Runtime Journal (/run/log/journal/26060271184d4dcda3a45fc77ec6613c) is 6.0M, max 48.3M, 42.2M free.
May 8 00:36:46.615204 systemd[1]: Queued start job for default target multi-user.target.
May 8 00:36:46.629637 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 00:36:46.630115 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 00:36:46.843692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:36:46.847401 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 00:36:46.848357 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 00:36:46.850168 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 00:36:46.851464 systemd[1]: Mounted media.mount - External Media Directory.
May 8 00:36:46.852605 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 00:36:46.853917 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 00:36:46.855217 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 00:36:46.856521 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 00:36:46.858096 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 00:36:46.859711 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 00:36:46.859885 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 00:36:46.861540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:36:46.861734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:36:46.863289 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:36:46.863464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:36:46.864888 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:36:46.865055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:36:46.866889 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 00:36:46.867062 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 00:36:46.868589 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:36:46.868788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:36:46.870426 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 00:36:46.871887 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 00:36:46.873557 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 00:36:46.888041 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 00:36:46.899774 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 00:36:46.902228 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 00:36:46.912336 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 00:36:46.912372 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 00:36:46.914442 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 00:36:46.922859 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 00:36:46.926261 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 00:36:46.927450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:36:46.930119 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 00:36:46.933438 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 00:36:46.934832 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:36:46.937020 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 00:36:46.937158 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:36:46.939320 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 00:36:46.942365 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 00:36:46.947885 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 00:36:46.950048 systemd-journald[1133]: Time spent on flushing to /var/log/journal/26060271184d4dcda3a45fc77ec6613c is 14.287ms for 998 entries.
May 8 00:36:46.950048 systemd-journald[1133]: System Journal (/var/log/journal/26060271184d4dcda3a45fc77ec6613c) is 8.0M, max 195.6M, 187.6M free.
May 8 00:36:47.002863 systemd-journald[1133]: Received client request to flush runtime journal.
May 8 00:36:47.002922 kernel: loop0: detected capacity change from 0 to 210664
May 8 00:36:46.950989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 00:36:46.951410 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 00:36:46.958510 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 00:36:46.960346 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 00:36:46.977872 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 00:36:46.980401 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 00:36:46.985645 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 00:36:46.989714 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 00:36:46.992836 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 00:36:47.002330 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 00:36:47.004407 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 00:36:47.023360 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 00:36:47.024943 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 00:36:47.033855 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 00:36:47.041507 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 00:36:47.043330 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 00:36:47.047711 kernel: loop1: detected capacity change from 0 to 142488
May 8 00:36:47.061492 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 8 00:36:47.061510 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
May 8 00:36:47.068361 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 00:36:47.087721 kernel: loop2: detected capacity change from 0 to 140768
May 8 00:36:47.136785 kernel: loop3: detected capacity change from 0 to 210664
May 8 00:36:47.146700 kernel: loop4: detected capacity change from 0 to 142488
May 8 00:36:47.155693 kernel: loop5: detected capacity change from 0 to 140768
May 8 00:36:47.169685 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 8 00:36:47.176541 (sd-merge)[1200]: Merged extensions into '/usr'.
May 8 00:36:47.181067 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 00:36:47.181177 systemd[1]: Reloading...
May 8 00:36:47.244884 zram_generator::config[1225]: No configuration found.
May 8 00:36:47.328446 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 00:36:47.384937 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:36:47.435647 systemd[1]: Reloading finished in 253 ms.
May 8 00:36:47.469876 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 00:36:47.471462 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 00:36:47.486822 systemd[1]: Starting ensure-sysext.service...
May 8 00:36:47.488827 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 00:36:47.497989 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
May 8 00:36:47.498005 systemd[1]: Reloading...
May 8 00:36:47.540721 zram_generator::config[1292]: No configuration found.
May 8 00:36:47.542911 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 00:36:47.543301 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 00:36:47.544331 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 00:36:47.544624 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 8 00:36:47.545288 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
May 8 00:36:47.550091 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:36:47.550107 systemd-tmpfiles[1264]: Skipping /boot
May 8 00:36:47.560945 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
May 8 00:36:47.561025 systemd-tmpfiles[1264]: Skipping /boot
May 8 00:36:47.654982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:36:47.704493 systemd[1]: Reloading finished in 206 ms.
May 8 00:36:47.725227 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 00:36:47.746279 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 00:36:47.753220 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 00:36:47.755628 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 00:36:47.758010 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 00:36:47.764086 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 00:36:47.769929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 00:36:47.772666 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 00:36:47.777865 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 00:36:47.780367 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:36:47.780536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:36:47.785735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:36:47.791729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:36:47.795798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:36:47.797086 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:36:47.797200 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:36:47.798124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:36:47.798927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:36:47.811239 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 00:36:47.813348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:36:47.813537 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:36:47.815582 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:36:47.815988 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:36:47.817778 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
May 8 00:36:47.825611 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 00:36:47.828834 augenrules[1360]: No rules
May 8 00:36:47.831941 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 00:36:47.833486 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 00:36:47.836968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:36:47.837295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 00:36:47.847772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 00:36:47.850896 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 00:36:47.855919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 00:36:47.858196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 00:36:47.859387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 00:36:47.860756 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 00:36:47.862734 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:36:47.863649 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 00:36:47.865538 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 00:36:47.867574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:36:47.867785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 00:36:47.879287 systemd[1]: Finished ensure-sysext.service.
May 8 00:36:47.880775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:36:47.880961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 00:36:47.883579 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:36:47.883943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 00:36:47.885490 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 00:36:47.887101 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:36:47.887288 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 00:36:47.914816 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 00:36:47.916030 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:36:47.916102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 00:36:47.917698 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1389)
May 8 00:36:47.918829 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 00:36:47.920305 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:36:47.920663 systemd-resolved[1334]: Positive Trust Anchors:
May 8 00:36:47.920687 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:36:47.920695 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 00:36:47.920719 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 00:36:47.926605 systemd-resolved[1334]: Defaulting to hostname 'linux'.
May 8 00:36:47.928950 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 00:36:47.931760 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 00:36:47.975445 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 00:36:47.979687 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 8 00:36:47.984895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 00:36:47.988950 systemd-networkd[1403]: lo: Link UP
May 8 00:36:47.989270 systemd-networkd[1403]: lo: Gained carrier
May 8 00:36:47.993164 systemd-networkd[1403]: Enumeration completed
May 8 00:36:47.993325 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 00:36:47.993452 systemd[1]: Reached target network.target - Network.
May 8 00:36:47.996846 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:36:47.996856 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:36:47.999171 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:36:47.999206 systemd-networkd[1403]: eth0: Link UP
May 8 00:36:47.999211 systemd-networkd[1403]: eth0: Gained carrier
May 8 00:36:47.999220 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 00:36:48.001792 kernel: ACPI: button: Power Button [PWRF]
May 8 00:36:48.003947 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 8 00:36:48.003881 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 00:36:48.005494 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 00:36:48.014746 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 8 00:36:48.015017 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 8 00:36:48.015184 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 8 00:36:48.015359 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 8 00:36:48.021753 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:36:49.189659 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 00:36:49.189741 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 8 00:36:49.189783 systemd-timesyncd[1405]: Initial clock synchronization to Thu 2025-05-08 00:36:49.189630 UTC.
May 8 00:36:49.189830 systemd-resolved[1334]: Clock change detected. Flushing caches.
May 8 00:36:49.192706 systemd[1]: Reached target time-set.target - System Time Set.
May 8 00:36:49.203625 kernel: mousedev: PS/2 mouse device common for all mice
May 8 00:36:49.207910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:36:49.225346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 00:36:49.225648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:36:49.279681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 00:36:49.287888 kernel: kvm_amd: TSC scaling supported
May 8 00:36:49.287962 kernel: kvm_amd: Nested Virtualization enabled
May 8 00:36:49.287976 kernel: kvm_amd: Nested Paging enabled
May 8 00:36:49.288008 kernel: kvm_amd: LBR virtualization supported
May 8 00:36:49.289024 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 8 00:36:49.289047 kernel: kvm_amd: Virtual GIF supported
May 8 00:36:49.309609 kernel: EDAC MC: Ver: 3.0.0
May 8 00:36:49.339381 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 00:36:49.341021 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 00:36:49.362755 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 00:36:49.370940 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:36:49.400659 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 00:36:49.402223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 00:36:49.403358 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 00:36:49.404574 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 00:36:49.405871 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 00:36:49.407334 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 00:36:49.408561 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 00:36:49.409916 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 00:36:49.411191 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 00:36:49.411224 systemd[1]: Reached target paths.target - Path Units.
May 8 00:36:49.412149 systemd[1]: Reached target timers.target - Timer Units.
May 8 00:36:49.413909 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 00:36:49.416584 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 00:36:49.425070 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 00:36:49.427452 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 00:36:49.429316 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 00:36:49.430522 systemd[1]: Reached target sockets.target - Socket Units.
May 8 00:36:49.431500 systemd[1]: Reached target basic.target - Basic System.
May 8 00:36:49.432473 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 00:36:49.432501 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 00:36:49.433550 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 00:36:49.435651 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 00:36:49.439613 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 00:36:49.440023 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 00:36:49.443352 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 00:36:49.444698 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 00:36:49.446620 jq[1440]: false
May 8 00:36:49.454755 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 00:36:49.458091 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 00:36:49.460803 dbus-daemon[1439]: [system] SELinux support is enabled
May 8 00:36:49.463377 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 00:36:49.466005 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 00:36:49.471998 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 00:36:49.473432 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 00:36:49.473866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 00:36:49.474413 extend-filesystems[1441]: Found loop3
May 8 00:36:49.476821 extend-filesystems[1441]: Found loop4
May 8 00:36:49.476821 extend-filesystems[1441]: Found loop5
May 8 00:36:49.476821 extend-filesystems[1441]: Found sr0
May 8 00:36:49.476821 extend-filesystems[1441]: Found vda
May 8 00:36:49.476821 extend-filesystems[1441]: Found vda1
May 8 00:36:49.476821 extend-filesystems[1441]: Found vda2
May 8 00:36:49.476821 extend-filesystems[1441]: Found vda3
May 8 00:36:49.476821 extend-filesystems[1441]: Found usr
May 8 00:36:49.476821 extend-filesystems[1441]: Found vda4
May 8 00:36:49.476821 extend-filesystems[1441]: Found vda6
May 8 00:36:49.476821 extend-filesystems[1441]: Found vda7
May 8 00:36:49.476821 extend-filesystems[1441]: Found vda9
May 8 00:36:49.476821 extend-filesystems[1441]: Checking size of /dev/vda9
May 8 00:36:49.508471 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1378)
May 8 00:36:49.508564 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 8 00:36:49.474774 systemd[1]: Starting update-engine.service - Update Engine...
May 8 00:36:49.508699 extend-filesystems[1441]: Resized partition /dev/vda9
May 8 00:36:49.477939 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 00:36:49.509860 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
May 8 00:36:49.478936 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 00:36:49.484354 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 00:36:49.511151 jq[1455]: true
May 8 00:36:49.494560 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 00:36:49.495027 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 00:36:49.496811 systemd[1]: motdgen.service: Deactivated successfully.
May 8 00:36:49.497040 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 00:36:49.503708 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 00:36:49.503908 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 00:36:49.515440 update_engine[1453]: I20250508 00:36:49.515314 1453 main.cc:92] Flatcar Update Engine starting
May 8 00:36:49.523845 update_engine[1453]: I20250508 00:36:49.516732 1453 update_check_scheduler.cc:74] Next update check in 4m58s
May 8 00:36:49.531111 jq[1466]: true
May 8 00:36:49.539992 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 00:36:49.551604 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:36:49.551659 tar[1464]: linux-amd64/helm
May 8 00:36:49.549649 systemd[1]: Started update-engine.service - Update Engine.
May 8 00:36:49.558069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 00:36:49.558092 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 00:36:49.559445 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 00:36:49.559463 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 00:36:49.568758 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 00:36:49.608645 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:36:49.608672 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:36:49.610115 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:36:49.610115 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:36:49.610115 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:36:49.616455 extend-filesystems[1441]: Resized filesystem in /dev/vda9
May 8 00:36:49.610743 systemd-logind[1452]: New seat seat0.
May 8 00:36:49.611927 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 00:36:49.618216 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:36:49.618444 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 00:36:49.619395 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:36:49.629017 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:36:49.637297 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:36:49.639260 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 00:36:49.641403 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 8 00:36:49.653967 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 8 00:36:49.668852 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 8 00:36:49.677099 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:36:49.677309 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 8 00:36:49.687411 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 8 00:36:49.700651 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 8 00:36:49.708867 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 8 00:36:49.711853 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 8 00:36:49.713346 systemd[1]: Reached target getty.target - Login Prompts.
May 8 00:36:49.756574 containerd[1467]: time="2025-05-08T00:36:49.756337723Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 8 00:36:49.779858 containerd[1467]: time="2025-05-08T00:36:49.779818416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:36:49.781616 containerd[1467]: time="2025-05-08T00:36:49.781547790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:36:49.781616 containerd[1467]: time="2025-05-08T00:36:49.781605999Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:36:49.781725 containerd[1467]: time="2025-05-08T00:36:49.781627149Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:36:49.781837 containerd[1467]: time="2025-05-08T00:36:49.781812096Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 00:36:49.781837 containerd[1467]: time="2025-05-08T00:36:49.781834277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 00:36:49.781920 containerd[1467]: time="2025-05-08T00:36:49.781901353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:36:49.781920 containerd[1467]: time="2025-05-08T00:36:49.781917944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:36:49.782140 containerd[1467]: time="2025-05-08T00:36:49.782112930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:36:49.782140 containerd[1467]: time="2025-05-08T00:36:49.782131896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:36:49.782182 containerd[1467]: time="2025-05-08T00:36:49.782145782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:36:49.782182 containerd[1467]: time="2025-05-08T00:36:49.782156502Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:36:49.782263 containerd[1467]: time="2025-05-08T00:36:49.782246581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:36:49.782500 containerd[1467]: time="2025-05-08T00:36:49.782475160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:36:49.782669 containerd[1467]: time="2025-05-08T00:36:49.782641141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:36:49.782669 containerd[1467]: time="2025-05-08T00:36:49.782659696Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:36:49.782781 containerd[1467]: time="2025-05-08T00:36:49.782757639Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:36:49.782838 containerd[1467]: time="2025-05-08T00:36:49.782822150Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:36:49.794862 containerd[1467]: time="2025-05-08T00:36:49.794811058Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:36:49.794899 containerd[1467]: time="2025-05-08T00:36:49.794874086Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:36:49.794899 containerd[1467]: time="2025-05-08T00:36:49.794892451Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 00:36:49.794935 containerd[1467]: time="2025-05-08T00:36:49.794910214Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 00:36:49.794935 containerd[1467]: time="2025-05-08T00:36:49.794926485Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:36:49.795132 containerd[1467]: time="2025-05-08T00:36:49.795102455Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:36:49.795376 containerd[1467]: time="2025-05-08T00:36:49.795345521Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:36:49.795481 containerd[1467]: time="2025-05-08T00:36:49.795454645Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 00:36:49.795481 containerd[1467]: time="2025-05-08T00:36:49.795477368Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 00:36:49.795538 containerd[1467]: time="2025-05-08T00:36:49.795491364Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 00:36:49.795538 containerd[1467]: time="2025-05-08T00:36:49.795504559Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:36:49.795538 containerd[1467]: time="2025-05-08T00:36:49.795516772Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:36:49.795538 containerd[1467]: time="2025-05-08T00:36:49.795536779Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:36:49.795629 containerd[1467]: time="2025-05-08T00:36:49.795550495Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:36:49.795629 containerd[1467]: time="2025-05-08T00:36:49.795564772Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:36:49.795629 containerd[1467]: time="2025-05-08T00:36:49.795577977Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:36:49.795629 containerd[1467]: time="2025-05-08T00:36:49.795608464Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:36:49.795629 containerd[1467]: time="2025-05-08T00:36:49.795619475Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:36:49.795724 containerd[1467]: time="2025-05-08T00:36:49.795639602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795724 containerd[1467]: time="2025-05-08T00:36:49.795654120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795724 containerd[1467]: time="2025-05-08T00:36:49.795665611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795724 containerd[1467]: time="2025-05-08T00:36:49.795677764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795724 containerd[1467]: time="2025-05-08T00:36:49.795691379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795724 containerd[1467]: time="2025-05-08T00:36:49.795705045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795724 containerd[1467]: time="2025-05-08T00:36:49.795716767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795849 containerd[1467]: time="2025-05-08T00:36:49.795729260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795849 containerd[1467]: time="2025-05-08T00:36:49.795747054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795849 containerd[1467]: time="2025-05-08T00:36:49.795762002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795849 containerd[1467]: time="2025-05-08T00:36:49.795774545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:36:49.795849 containerd[1467]: time="2025-05-08T00:36:49.795786849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 00:36:49.797854 containerd[1467]: time="2025-05-08T00:36:49.797820663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:36:49.797887 containerd[1467]: time="2025-05-08T00:36:49.797856801Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 00:36:49.797908 containerd[1467]: time="2025-05-08T00:36:49.797886016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 00:36:49.797938 containerd[1467]: time="2025-05-08T00:36:49.797902797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:36:49.797938 containerd[1467]: time="2025-05-08T00:36:49.797927033Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:36:49.798044 containerd[1467]: time="2025-05-08T00:36:49.798018765Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:36:49.798088 containerd[1467]: time="2025-05-08T00:36:49.798045094Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 00:36:49.798088 containerd[1467]: time="2025-05-08T00:36:49.798058740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:36:49.798088 containerd[1467]: time="2025-05-08T00:36:49.798074540Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 00:36:49.798150 containerd[1467]: time="2025-05-08T00:36:49.798089047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:36:49.798150 containerd[1467]: time="2025-05-08T00:36:49.798105207Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 00:36:49.798150 containerd[1467]: time="2025-05-08T00:36:49.798116478Z" level=info msg="NRI interface is disabled by configuration."
May 8 00:36:49.798150 containerd[1467]: time="2025-05-08T00:36:49.798131366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:36:49.798470 containerd[1467]: time="2025-05-08T00:36:49.798406823Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:36:49.798622 containerd[1467]: time="2025-05-08T00:36:49.798473037Z" level=info msg="Connect containerd service"
May 8 00:36:49.798622 containerd[1467]: time="2025-05-08T00:36:49.798512140Z" level=info msg="using legacy CRI server"
May 8 00:36:49.798622 containerd[1467]: time="2025-05-08T00:36:49.798522881Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 8 00:36:49.798686 containerd[1467]: time="2025-05-08T00:36:49.798630753Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:36:49.799295 containerd[1467]: time="2025-05-08T00:36:49.799257809Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:36:49.799584 containerd[1467]: time="2025-05-08T00:36:49.799440692Z" level=info msg="Start subscribing containerd event"
May 8 00:36:49.799584 containerd[1467]: time="2025-05-08T00:36:49.799497909Z" level=info msg="Start recovering state"
May 8 00:36:49.799655 containerd[1467]: time="2025-05-08T00:36:49.799634435Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:36:49.799878 containerd[1467]: time="2025-05-08T00:36:49.799849058Z" level=info msg="Start event monitor"
May 8 00:36:49.799907 containerd[1467]: time="2025-05-08T00:36:49.799885056Z" level=info msg="Start snapshots syncer"
May 8 00:36:49.799907 containerd[1467]: time="2025-05-08T00:36:49.799896307Z" level=info msg="Start cni network conf syncer for default"
May 8 00:36:49.799943 containerd[1467]: time="2025-05-08T00:36:49.799905314Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:36:49.799974 containerd[1467]: time="2025-05-08T00:36:49.799905564Z" level=info msg="Start streaming server"
May 8 00:36:49.800044 containerd[1467]: time="2025-05-08T00:36:49.800023305Z" level=info msg="containerd successfully booted in 0.044746s"
May 8 00:36:49.800203 systemd[1]: Started containerd.service - containerd container runtime.
May 8 00:36:49.958844 tar[1464]: linux-amd64/LICENSE
May 8 00:36:49.958844 tar[1464]: linux-amd64/README.md
May 8 00:36:49.973897 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 8 00:36:50.944798 systemd-networkd[1403]: eth0: Gained IPv6LL May 8 00:36:50.947984 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:36:50.949840 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:36:50.960889 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:36:50.963317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:36:50.965673 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:36:50.984362 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:36:50.984645 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:36:50.986260 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:36:50.990883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:36:51.575846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:36:51.577545 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 00:36:51.578878 systemd[1]: Startup finished in 883ms (kernel) + 6.358s (initrd) + 4.403s (userspace) = 11.646s. 
May 8 00:36:51.581241 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:36:52.016469 kubelet[1552]: E0508 00:36:52.016354 1552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:36:52.020962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:36:52.021228 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:36:54.139124 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:36:54.140377 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:46610.service - OpenSSH per-connection server daemon (10.0.0.1:46610). May 8 00:36:54.181769 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 46610 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:36:54.183573 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:36:54.191479 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:36:54.202888 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:36:54.204742 systemd-logind[1452]: New session 1 of user core. May 8 00:36:54.213799 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:36:54.224007 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:36:54.226873 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:36:54.335798 systemd[1571]: Queued start job for default target default.target. May 8 00:36:54.345883 systemd[1571]: Created slice app.slice - User Application Slice. 
May 8 00:36:54.345911 systemd[1571]: Reached target paths.target - Paths. May 8 00:36:54.345924 systemd[1571]: Reached target timers.target - Timers. May 8 00:36:54.347676 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:36:54.362021 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:36:54.362214 systemd[1571]: Reached target sockets.target - Sockets. May 8 00:36:54.362241 systemd[1571]: Reached target basic.target - Basic System. May 8 00:36:54.362298 systemd[1571]: Reached target default.target - Main User Target. May 8 00:36:54.362347 systemd[1571]: Startup finished in 128ms. May 8 00:36:54.362577 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:36:54.364071 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:36:54.423794 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:46620.service - OpenSSH per-connection server daemon (10.0.0.1:46620). May 8 00:36:54.460575 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 46620 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:36:54.462614 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:36:54.467247 systemd-logind[1452]: New session 2 of user core. May 8 00:36:54.477738 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 00:36:54.532134 sshd[1582]: pam_unix(sshd:session): session closed for user core May 8 00:36:54.545411 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:46620.service: Deactivated successfully. May 8 00:36:54.547125 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:36:54.548870 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. May 8 00:36:54.561915 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:46624.service - OpenSSH per-connection server daemon (10.0.0.1:46624). May 8 00:36:54.562873 systemd-logind[1452]: Removed session 2. 
May 8 00:36:54.591312 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 46624 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:36:54.592905 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:36:54.596785 systemd-logind[1452]: New session 3 of user core. May 8 00:36:54.605738 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:36:54.655772 sshd[1589]: pam_unix(sshd:session): session closed for user core May 8 00:36:54.665377 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:46624.service: Deactivated successfully. May 8 00:36:54.667115 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:36:54.668713 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. May 8 00:36:54.677814 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:46626.service - OpenSSH per-connection server daemon (10.0.0.1:46626). May 8 00:36:54.678725 systemd-logind[1452]: Removed session 3. May 8 00:36:54.706862 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 46626 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:36:54.708294 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:36:54.711959 systemd-logind[1452]: New session 4 of user core. May 8 00:36:54.720708 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:36:54.774555 sshd[1596]: pam_unix(sshd:session): session closed for user core May 8 00:36:54.791000 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:46626.service: Deactivated successfully. May 8 00:36:54.793039 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:36:54.794862 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. May 8 00:36:54.796240 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:46636.service - OpenSSH per-connection server daemon (10.0.0.1:46636). May 8 00:36:54.797065 systemd-logind[1452]: Removed session 4. 
May 8 00:36:54.832074 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 46636 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:36:54.833650 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:36:54.837686 systemd-logind[1452]: New session 5 of user core. May 8 00:36:54.852696 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:36:54.999613 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:36:54.999953 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:36:55.016914 sudo[1606]: pam_unix(sudo:session): session closed for user root May 8 00:36:55.018978 sshd[1603]: pam_unix(sshd:session): session closed for user core May 8 00:36:55.026177 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:46636.service: Deactivated successfully. May 8 00:36:55.027797 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:36:55.029160 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. May 8 00:36:55.030552 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:46640.service - OpenSSH per-connection server daemon (10.0.0.1:46640). May 8 00:36:55.031300 systemd-logind[1452]: Removed session 5. May 8 00:36:55.063309 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 46640 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:36:55.064752 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:36:55.068439 systemd-logind[1452]: New session 6 of user core. May 8 00:36:55.074710 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 8 00:36:55.127327 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:36:55.127761 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:36:55.131253 sudo[1615]: pam_unix(sudo:session): session closed for user root May 8 00:36:55.137763 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:36:55.138111 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:36:55.157866 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:36:55.159934 auditctl[1618]: No rules May 8 00:36:55.161145 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:36:55.161452 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:36:55.163334 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:36:55.193991 augenrules[1636]: No rules May 8 00:36:55.195954 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:36:55.197318 sudo[1614]: pam_unix(sudo:session): session closed for user root May 8 00:36:55.199120 sshd[1611]: pam_unix(sshd:session): session closed for user core May 8 00:36:55.209579 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:46640.service: Deactivated successfully. May 8 00:36:55.211229 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:36:55.212623 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. May 8 00:36:55.220837 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:46648.service - OpenSSH per-connection server daemon (10.0.0.1:46648). May 8 00:36:55.221819 systemd-logind[1452]: Removed session 6. 
May 8 00:36:55.249289 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:36:55.250818 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:36:55.254885 systemd-logind[1452]: New session 7 of user core. May 8 00:36:55.264707 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:36:55.317446 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:36:55.317786 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:36:55.682972 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 00:36:55.683207 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 00:36:55.940666 dockerd[1667]: time="2025-05-08T00:36:55.940503881Z" level=info msg="Starting up" May 8 00:36:56.399722 dockerd[1667]: time="2025-05-08T00:36:56.399614892Z" level=info msg="Loading containers: start." May 8 00:36:56.538616 kernel: Initializing XFRM netlink socket May 8 00:36:56.614833 systemd-networkd[1403]: docker0: Link UP May 8 00:36:56.640207 dockerd[1667]: time="2025-05-08T00:36:56.640161820Z" level=info msg="Loading containers: done." May 8 00:36:56.654077 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3378042214-merged.mount: Deactivated successfully. 
May 8 00:36:56.663267 dockerd[1667]: time="2025-05-08T00:36:56.663234868Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:36:56.663339 dockerd[1667]: time="2025-05-08T00:36:56.663317974Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 00:36:56.663444 dockerd[1667]: time="2025-05-08T00:36:56.663421007Z" level=info msg="Daemon has completed initialization" May 8 00:36:56.725321 dockerd[1667]: time="2025-05-08T00:36:56.725106812Z" level=info msg="API listen on /run/docker.sock" May 8 00:36:56.725450 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 00:36:57.431032 containerd[1467]: time="2025-05-08T00:36:57.430993604Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:36:58.777407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523794992.mount: Deactivated successfully. May 8 00:37:02.136201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:37:02.149816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:37:02.320032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:37:02.326001 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:37:02.825611 kubelet[1885]: E0508 00:37:02.825554 1885 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:37:02.832391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:37:02.832603 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:37:02.984970 containerd[1467]: time="2025-05-08T00:37:02.984899644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:03.007182 containerd[1467]: time="2025-05-08T00:37:03.007106738Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 8 00:37:03.018163 containerd[1467]: time="2025-05-08T00:37:03.018136607Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:03.033812 containerd[1467]: time="2025-05-08T00:37:03.033771213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:03.034735 containerd[1467]: time="2025-05-08T00:37:03.034700396Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 5.603671817s" May 8 00:37:03.034735 containerd[1467]: time="2025-05-08T00:37:03.034733488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 8 00:37:03.058463 containerd[1467]: time="2025-05-08T00:37:03.058418134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:37:06.967975 containerd[1467]: time="2025-05-08T00:37:06.967907388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:06.990787 containerd[1467]: time="2025-05-08T00:37:06.990697897Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 8 00:37:06.999995 containerd[1467]: time="2025-05-08T00:37:06.999962304Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:07.018901 containerd[1467]: time="2025-05-08T00:37:07.018855362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:07.020060 containerd[1467]: time="2025-05-08T00:37:07.020020768Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 
3.961562369s" May 8 00:37:07.020110 containerd[1467]: time="2025-05-08T00:37:07.020061034Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 8 00:37:07.044532 containerd[1467]: time="2025-05-08T00:37:07.044475378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:37:09.253768 containerd[1467]: time="2025-05-08T00:37:09.253697096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:09.254508 containerd[1467]: time="2025-05-08T00:37:09.254475066Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 8 00:37:09.255854 containerd[1467]: time="2025-05-08T00:37:09.255817273Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:09.258473 containerd[1467]: time="2025-05-08T00:37:09.258432218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:09.259804 containerd[1467]: time="2025-05-08T00:37:09.259765038Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.215249725s" May 8 00:37:09.259848 containerd[1467]: time="2025-05-08T00:37:09.259806326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference 
\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 8 00:37:09.280784 containerd[1467]: time="2025-05-08T00:37:09.280749629Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:37:11.677693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164265997.mount: Deactivated successfully. May 8 00:37:12.031540 containerd[1467]: time="2025-05-08T00:37:12.031406826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:12.032745 containerd[1467]: time="2025-05-08T00:37:12.032675886Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 8 00:37:12.034180 containerd[1467]: time="2025-05-08T00:37:12.034083617Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:12.038414 containerd[1467]: time="2025-05-08T00:37:12.038361431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:12.038996 containerd[1467]: time="2025-05-08T00:37:12.038949103Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.758164418s" May 8 00:37:12.039043 containerd[1467]: time="2025-05-08T00:37:12.038995330Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 00:37:12.061493 
containerd[1467]: time="2025-05-08T00:37:12.061395035Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:37:12.667148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828446105.mount: Deactivated successfully. May 8 00:37:12.886305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:37:12.900029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:37:13.091145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:37:13.095401 (kubelet)[1955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:37:13.478472 kubelet[1955]: E0508 00:37:13.478417 1955 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:37:13.483320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:37:13.483607 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 8 00:37:13.934367 containerd[1467]: time="2025-05-08T00:37:13.934309367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:13.935112 containerd[1467]: time="2025-05-08T00:37:13.935069042Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 8 00:37:13.936118 containerd[1467]: time="2025-05-08T00:37:13.936091410Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:13.943276 containerd[1467]: time="2025-05-08T00:37:13.943228707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:13.944290 containerd[1467]: time="2025-05-08T00:37:13.944208024Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.882766021s" May 8 00:37:13.944339 containerd[1467]: time="2025-05-08T00:37:13.944289507Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 00:37:14.033496 containerd[1467]: time="2025-05-08T00:37:14.033450498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:37:15.755167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800158069.mount: Deactivated successfully. 
May 8 00:37:15.982631 containerd[1467]: time="2025-05-08T00:37:15.982564508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:15.995928 containerd[1467]: time="2025-05-08T00:37:15.995840470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 8 00:37:16.011325 containerd[1467]: time="2025-05-08T00:37:16.011233483Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:16.058170 containerd[1467]: time="2025-05-08T00:37:16.058097487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:16.059098 containerd[1467]: time="2025-05-08T00:37:16.059055383Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.025555223s" May 8 00:37:16.059098 containerd[1467]: time="2025-05-08T00:37:16.059096711Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 00:37:16.082696 containerd[1467]: time="2025-05-08T00:37:16.082628500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:37:17.628884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918247629.mount: Deactivated successfully. 
May 8 00:37:20.333075 containerd[1467]: time="2025-05-08T00:37:20.333015040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:20.333926 containerd[1467]: time="2025-05-08T00:37:20.333893949Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 8 00:37:20.335291 containerd[1467]: time="2025-05-08T00:37:20.335245113Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:20.338759 containerd[1467]: time="2025-05-08T00:37:20.338708860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:37:20.339743 containerd[1467]: time="2025-05-08T00:37:20.339710880Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.257040732s" May 8 00:37:20.339743 containerd[1467]: time="2025-05-08T00:37:20.339743761Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 00:37:22.578000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:37:22.593860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:37:22.612414 systemd[1]: Reloading requested from client PID 2138 ('systemctl') (unit session-7.scope)... May 8 00:37:22.612431 systemd[1]: Reloading... 
May 8 00:37:22.698655 zram_generator::config[2180]: No configuration found.
May 8 00:37:23.269310 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:37:23.349247 systemd[1]: Reloading finished in 736 ms.
May 8 00:37:23.397310 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 8 00:37:23.397410 systemd[1]: kubelet.service: Failed with result 'signal'.
May 8 00:37:23.397689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:37:23.399297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 00:37:23.549652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 00:37:23.554275 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 00:37:23.757026 kubelet[2225]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:37:23.757026 kubelet[2225]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 8 00:37:23.757026 kubelet[2225]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:37:23.757411 kubelet[2225]: I0508 00:37:23.757090 2225 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:37:24.081443 kubelet[2225]: I0508 00:37:24.081396 2225 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 8 00:37:24.081443 kubelet[2225]: I0508 00:37:24.081428 2225 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:37:24.081709 kubelet[2225]: I0508 00:37:24.081686 2225 server.go:927] "Client rotation is on, will bootstrap in background"
May 8 00:37:24.126643 kubelet[2225]: I0508 00:37:24.126567 2225 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:37:24.140739 kubelet[2225]: E0508 00:37:24.140701 2225 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.161146 kubelet[2225]: I0508 00:37:24.161101 2225 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:37:24.166529 kubelet[2225]: I0508 00:37:24.166483 2225 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:37:24.166720 kubelet[2225]: I0508 00:37:24.166518 2225 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 8 00:37:24.167270 kubelet[2225]: I0508 00:37:24.167235 2225 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:37:24.167270 kubelet[2225]: I0508 00:37:24.167253 2225 container_manager_linux.go:301] "Creating device plugin manager"
May 8 00:37:24.167421 kubelet[2225]: I0508 00:37:24.167394 2225 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:37:24.174322 kubelet[2225]: I0508 00:37:24.174286 2225 kubelet.go:400] "Attempting to sync node with API server"
May 8 00:37:24.174322 kubelet[2225]: I0508 00:37:24.174309 2225 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:37:24.174391 kubelet[2225]: I0508 00:37:24.174337 2225 kubelet.go:312] "Adding apiserver pod source"
May 8 00:37:24.174391 kubelet[2225]: I0508 00:37:24.174358 2225 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:37:24.175032 kubelet[2225]: W0508 00:37:24.174864 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.175032 kubelet[2225]: E0508 00:37:24.174954 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.176502 kubelet[2225]: W0508 00:37:24.176438 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.176502 kubelet[2225]: E0508 00:37:24.176494 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.178145 kubelet[2225]: I0508 00:37:24.178106 2225 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 8 00:37:24.179546 kubelet[2225]: I0508 00:37:24.179515 2225 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:37:24.179624 kubelet[2225]: W0508 00:37:24.179574 2225 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:37:24.180271 kubelet[2225]: I0508 00:37:24.180252 2225 server.go:1264] "Started kubelet"
May 8 00:37:24.181384 kubelet[2225]: I0508 00:37:24.181337 2225 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:37:24.181566 kubelet[2225]: I0508 00:37:24.181538 2225 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:37:24.181743 kubelet[2225]: I0508 00:37:24.181722 2225 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:37:24.181790 kubelet[2225]: I0508 00:37:24.181757 2225 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:37:24.182715 kubelet[2225]: I0508 00:37:24.182685 2225 server.go:455] "Adding debug handlers to kubelet server"
May 8 00:37:24.188086 kubelet[2225]: E0508 00:37:24.187854 2225 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:37:24.188086 kubelet[2225]: I0508 00:37:24.187888 2225 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 8 00:37:24.188086 kubelet[2225]: I0508 00:37:24.187979 2225 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:37:24.188086 kubelet[2225]: I0508 00:37:24.188013 2225 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:37:24.188278 kubelet[2225]: W0508 00:37:24.188236 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.188318 kubelet[2225]: E0508 00:37:24.188279 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.188473 kubelet[2225]: E0508 00:37:24.188436 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms"
May 8 00:37:24.188646 kubelet[2225]: E0508 00:37:24.188517 2225 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:37:24.190382 kubelet[2225]: I0508 00:37:24.190353 2225 factory.go:221] Registration of the containerd container factory successfully
May 8 00:37:24.190382 kubelet[2225]: I0508 00:37:24.190367 2225 factory.go:221] Registration of the systemd container factory successfully
May 8 00:37:24.190495 kubelet[2225]: I0508 00:37:24.190417 2225 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:37:24.201611 kubelet[2225]: E0508 00:37:24.201144 2225 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d663d5a18b0e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:37:24.180226278 +0000 UTC m=+0.622188352,LastTimestamp:2025-05-08 00:37:24.180226278 +0000 UTC m=+0.622188352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 8 00:37:24.246914 kubelet[2225]: I0508 00:37:24.246875 2225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:37:24.248707 kubelet[2225]: I0508 00:37:24.248517 2225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:37:24.248707 kubelet[2225]: I0508 00:37:24.248563 2225 status_manager.go:217] "Starting to sync pod status with apiserver"
May 8 00:37:24.249410 kubelet[2225]: W0508 00:37:24.249224 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.249410 kubelet[2225]: E0508 00:37:24.249269 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:24.249481 kubelet[2225]: I0508 00:37:24.249419 2225 kubelet.go:2337] "Starting kubelet main sync loop"
May 8 00:37:24.249927 kubelet[2225]: E0508 00:37:24.249666 2225 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:37:24.251794 kubelet[2225]: I0508 00:37:24.251763 2225 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 8 00:37:24.251794 kubelet[2225]: I0508 00:37:24.251789 2225 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 8 00:37:24.251872 kubelet[2225]: I0508 00:37:24.251815 2225 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:37:24.289806 kubelet[2225]: I0508 00:37:24.289774 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:37:24.290153 kubelet[2225]: E0508 00:37:24.290109 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
May 8 00:37:24.350495 kubelet[2225]: E0508 00:37:24.350364 2225 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:37:24.389146 kubelet[2225]: E0508 00:37:24.389098 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms"
May 8 00:37:24.491709 kubelet[2225]: I0508 00:37:24.491665 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:37:24.492100 kubelet[2225]: E0508 00:37:24.492044 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
May 8 00:37:24.551278 kubelet[2225]: E0508 00:37:24.551225 2225 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:37:24.789987 kubelet[2225]: E0508 00:37:24.789903 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms"
May 8 00:37:24.893242 kubelet[2225]: I0508 00:37:24.893213 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:37:24.893506 kubelet[2225]: E0508 00:37:24.893464 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
May 8 00:37:24.951692 kubelet[2225]: E0508 00:37:24.951652 2225 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:37:25.070625 kubelet[2225]: W0508 00:37:25.070486 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:25.070625 kubelet[2225]: E0508 00:37:25.070543 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:25.343160 kubelet[2225]: W0508 00:37:25.343028 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:25.343160 kubelet[2225]: E0508 00:37:25.343094 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:25.573433 kubelet[2225]: W0508 00:37:25.573376 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:25.573433 kubelet[2225]: E0508 00:37:25.573421 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:25.591086 kubelet[2225]: E0508 00:37:25.591027 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s"
May 8 00:37:25.694989 kubelet[2225]: I0508 00:37:25.694948 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:37:25.695270 kubelet[2225]: E0508 00:37:25.695235 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
May 8 00:37:25.727743 kubelet[2225]: W0508 00:37:25.727679 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:25.727743 kubelet[2225]: E0508 00:37:25.727732 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:25.751922 kubelet[2225]: E0508 00:37:25.751823 2225 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:37:25.964315 kubelet[2225]: I0508 00:37:25.964197 2225 policy_none.go:49] "None policy: Start"
May 8 00:37:25.965120 kubelet[2225]: I0508 00:37:25.965105 2225 memory_manager.go:170] "Starting memorymanager" policy="None"
May 8 00:37:25.965157 kubelet[2225]: I0508 00:37:25.965129 2225 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:37:26.268020 kubelet[2225]: E0508 00:37:26.267901 2225 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:26.584254 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 8 00:37:26.599957 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 8 00:37:26.603100 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 8 00:37:26.618518 kubelet[2225]: I0508 00:37:26.618490 2225 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:37:26.618781 kubelet[2225]: I0508 00:37:26.618725 2225 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:37:26.618874 kubelet[2225]: I0508 00:37:26.618858 2225 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:37:26.619835 kubelet[2225]: E0508 00:37:26.619821 2225 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 8 00:37:27.191641 kubelet[2225]: E0508 00:37:27.191569 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="3.2s"
May 8 00:37:27.297177 kubelet[2225]: I0508 00:37:27.297130 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 8 00:37:27.297496 kubelet[2225]: E0508 00:37:27.297463 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
May 8 00:37:27.352828 kubelet[2225]: I0508 00:37:27.352783 2225 topology_manager.go:215] "Topology Admit Handler" podUID="d3f6534d333f4a74b128f8d019b70c9d" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 8 00:37:27.353619 kubelet[2225]: I0508 00:37:27.353583 2225 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 8 00:37:27.354241 kubelet[2225]: I0508 00:37:27.354222 2225 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 8 00:37:27.360160 systemd[1]: Created slice kubepods-burstable-podd3f6534d333f4a74b128f8d019b70c9d.slice - libcontainer container kubepods-burstable-podd3f6534d333f4a74b128f8d019b70c9d.slice.
May 8 00:37:27.372833 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice.
May 8 00:37:27.376375 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice.
May 8 00:37:27.402166 kubelet[2225]: I0508 00:37:27.402144 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:27.402237 kubelet[2225]: I0508 00:37:27.402175 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:27.402237 kubelet[2225]: I0508 00:37:27.402199 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:27.402237 kubelet[2225]: I0508 00:37:27.402220 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:27.402455 kubelet[2225]: I0508 00:37:27.402239 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 8 00:37:27.402455 kubelet[2225]: I0508 00:37:27.402265 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 8 00:37:27.402455 kubelet[2225]: I0508 00:37:27.402283 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3f6534d333f4a74b128f8d019b70c9d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d3f6534d333f4a74b128f8d019b70c9d\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:27.402455 kubelet[2225]: I0508 00:37:27.402300 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3f6534d333f4a74b128f8d019b70c9d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d3f6534d333f4a74b128f8d019b70c9d\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:27.402455 kubelet[2225]: I0508 00:37:27.402319 2225 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3f6534d333f4a74b128f8d019b70c9d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d3f6534d333f4a74b128f8d019b70c9d\") " pod="kube-system/kube-apiserver-localhost"
May 8 00:37:27.487690 kubelet[2225]: W0508 00:37:27.487578 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:27.487690 kubelet[2225]: E0508 00:37:27.487637 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:27.671102 kubelet[2225]: E0508 00:37:27.671052 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:37:27.671802 containerd[1467]: time="2025-05-08T00:37:27.671746492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d3f6534d333f4a74b128f8d019b70c9d,Namespace:kube-system,Attempt:0,}"
May 8 00:37:27.674926 kubelet[2225]: E0508 00:37:27.674893 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:37:27.675208 containerd[1467]: time="2025-05-08T00:37:27.675172290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}"
May 8 00:37:27.678544 kubelet[2225]: E0508 00:37:27.678524 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:37:27.678994 containerd[1467]: time="2025-05-08T00:37:27.678958235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}"
May 8 00:37:28.002287 kubelet[2225]: W0508 00:37:28.002236 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:28.002287 kubelet[2225]: E0508 00:37:28.002286 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:28.048016 kubelet[2225]: W0508 00:37:28.047964 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:28.048016 kubelet[2225]: E0508 00:37:28.048013 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:28.072693 kubelet[2225]: W0508 00:37:28.072634 2225 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:28.072693 kubelet[2225]: E0508 00:37:28.072693 2225 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 8 00:37:29.318946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885582653.mount: Deactivated successfully.
May 8 00:37:29.415294 containerd[1467]: time="2025-05-08T00:37:29.415225987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:29.429955 containerd[1467]: time="2025-05-08T00:37:29.429887817Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:37:29.438522 containerd[1467]: time="2025-05-08T00:37:29.438488025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:29.445904 containerd[1467]: time="2025-05-08T00:37:29.445873069Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:29.454204 containerd[1467]: time="2025-05-08T00:37:29.454144571Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:29.459274 containerd[1467]: time="2025-05-08T00:37:29.459225255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 8 00:37:29.463712 containerd[1467]: time="2025-05-08T00:37:29.463627259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 8 00:37:29.468555 containerd[1467]: time="2025-05-08T00:37:29.468509076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 8 00:37:29.469250 containerd[1467]: time="2025-05-08T00:37:29.469213076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.793990911s"
May 8 00:37:29.489556 containerd[1467]: time="2025-05-08T00:37:29.489515689Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.810467342s"
May 8 00:37:29.490292 containerd[1467]: time="2025-05-08T00:37:29.490263343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.818440664s"
May 8 00:37:29.999535 containerd[1467]: time="2025-05-08T00:37:29.997562353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:29.999535 containerd[1467]: time="2025-05-08T00:37:29.998554212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:29.999535 containerd[1467]: time="2025-05-08T00:37:29.998569361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:29.999535 containerd[1467]: time="2025-05-08T00:37:29.998681404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:30.000218 containerd[1467]: time="2025-05-08T00:37:29.999966010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:30.000218 containerd[1467]: time="2025-05-08T00:37:30.000032086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:30.000218 containerd[1467]: time="2025-05-08T00:37:30.000045321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:30.000218 containerd[1467]: time="2025-05-08T00:37:30.000138128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:37:30.007468 containerd[1467]: time="2025-05-08T00:37:30.007338410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:37:30.007468 containerd[1467]: time="2025-05-08T00:37:30.007397302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:37:30.007468 containerd[1467]: time="2025-05-08T00:37:30.007418343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:30.007628 containerd[1467]: time="2025-05-08T00:37:30.007516899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:30.081503 systemd[1]: Started cri-containerd-da65913583733359b64af058f91ef549ef03ab00cc80c2babb23ee916963b594.scope - libcontainer container da65913583733359b64af058f91ef549ef03ab00cc80c2babb23ee916963b594. May 8 00:37:30.086620 systemd[1]: Started cri-containerd-92a048170b26b488139f26c940a06e4579e0f487d58e99aea484b5c51a4d6da4.scope - libcontainer container 92a048170b26b488139f26c940a06e4579e0f487d58e99aea484b5c51a4d6da4. May 8 00:37:30.091446 systemd[1]: Started cri-containerd-e69d22e4003a355b9e095822f401b59944b7d44e1ba9a2479bebdc398304cc4f.scope - libcontainer container e69d22e4003a355b9e095822f401b59944b7d44e1ba9a2479bebdc398304cc4f. May 8 00:37:30.143955 containerd[1467]: time="2025-05-08T00:37:30.143919192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"92a048170b26b488139f26c940a06e4579e0f487d58e99aea484b5c51a4d6da4\"" May 8 00:37:30.145345 kubelet[2225]: E0508 00:37:30.145321 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:30.150137 containerd[1467]: time="2025-05-08T00:37:30.150048945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"da65913583733359b64af058f91ef549ef03ab00cc80c2babb23ee916963b594\"" May 8 00:37:30.150755 containerd[1467]: time="2025-05-08T00:37:30.150734919Z" level=info msg="CreateContainer within sandbox 
\"92a048170b26b488139f26c940a06e4579e0f487d58e99aea484b5c51a4d6da4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:37:30.150801 kubelet[2225]: E0508 00:37:30.150756 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:30.152483 containerd[1467]: time="2025-05-08T00:37:30.152450303Z" level=info msg="CreateContainer within sandbox \"da65913583733359b64af058f91ef549ef03ab00cc80c2babb23ee916963b594\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:37:30.170716 containerd[1467]: time="2025-05-08T00:37:30.170651242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d3f6534d333f4a74b128f8d019b70c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e69d22e4003a355b9e095822f401b59944b7d44e1ba9a2479bebdc398304cc4f\"" May 8 00:37:30.171548 kubelet[2225]: E0508 00:37:30.171500 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:30.173472 containerd[1467]: time="2025-05-08T00:37:30.173434216Z" level=info msg="CreateContainer within sandbox \"e69d22e4003a355b9e095822f401b59944b7d44e1ba9a2479bebdc398304cc4f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:37:30.382009 containerd[1467]: time="2025-05-08T00:37:30.381847915Z" level=info msg="CreateContainer within sandbox \"da65913583733359b64af058f91ef549ef03ab00cc80c2babb23ee916963b594\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"16e9d7b40bf883c6faf61f70f898fd18b2606a86ac2584cfb9fad3f7e9c87d08\"" May 8 00:37:30.382988 containerd[1467]: time="2025-05-08T00:37:30.382946364Z" level=info msg="StartContainer for \"16e9d7b40bf883c6faf61f70f898fd18b2606a86ac2584cfb9fad3f7e9c87d08\"" May 8 
00:37:30.392597 kubelet[2225]: E0508 00:37:30.392535 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="6.4s" May 8 00:37:30.416722 systemd[1]: Started cri-containerd-16e9d7b40bf883c6faf61f70f898fd18b2606a86ac2584cfb9fad3f7e9c87d08.scope - libcontainer container 16e9d7b40bf883c6faf61f70f898fd18b2606a86ac2584cfb9fad3f7e9c87d08. May 8 00:37:30.430429 kubelet[2225]: E0508 00:37:30.430395 2225 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.50:6443: connect: connection refused May 8 00:37:30.441667 containerd[1467]: time="2025-05-08T00:37:30.441624853Z" level=info msg="CreateContainer within sandbox \"92a048170b26b488139f26c940a06e4579e0f487d58e99aea484b5c51a4d6da4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"070bc54200078f8c5b30e086dc11386f35d8c3915c1197c697187788efe99d87\"" May 8 00:37:30.442766 containerd[1467]: time="2025-05-08T00:37:30.442525887Z" level=info msg="StartContainer for \"070bc54200078f8c5b30e086dc11386f35d8c3915c1197c697187788efe99d87\"" May 8 00:37:30.471719 systemd[1]: Started cri-containerd-070bc54200078f8c5b30e086dc11386f35d8c3915c1197c697187788efe99d87.scope - libcontainer container 070bc54200078f8c5b30e086dc11386f35d8c3915c1197c697187788efe99d87. 
May 8 00:37:30.489186 containerd[1467]: time="2025-05-08T00:37:30.489140954Z" level=info msg="StartContainer for \"16e9d7b40bf883c6faf61f70f898fd18b2606a86ac2584cfb9fad3f7e9c87d08\" returns successfully" May 8 00:37:30.489332 containerd[1467]: time="2025-05-08T00:37:30.489152507Z" level=info msg="CreateContainer within sandbox \"e69d22e4003a355b9e095822f401b59944b7d44e1ba9a2479bebdc398304cc4f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2a51706885f04d63386ab8e51a06d8e5d2a4781b7f8e5eade41617764b36b34d\"" May 8 00:37:30.490822 containerd[1467]: time="2025-05-08T00:37:30.490664232Z" level=info msg="StartContainer for \"2a51706885f04d63386ab8e51a06d8e5d2a4781b7f8e5eade41617764b36b34d\"" May 8 00:37:30.499572 kubelet[2225]: I0508 00:37:30.499537 2225 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:37:30.500714 kubelet[2225]: E0508 00:37:30.500640 2225 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" May 8 00:37:30.525406 containerd[1467]: time="2025-05-08T00:37:30.525324555Z" level=info msg="StartContainer for \"070bc54200078f8c5b30e086dc11386f35d8c3915c1197c697187788efe99d87\" returns successfully" May 8 00:37:30.528918 systemd[1]: Started cri-containerd-2a51706885f04d63386ab8e51a06d8e5d2a4781b7f8e5eade41617764b36b34d.scope - libcontainer container 2a51706885f04d63386ab8e51a06d8e5d2a4781b7f8e5eade41617764b36b34d. 
May 8 00:37:30.586187 containerd[1467]: time="2025-05-08T00:37:30.586130432Z" level=info msg="StartContainer for \"2a51706885f04d63386ab8e51a06d8e5d2a4781b7f8e5eade41617764b36b34d\" returns successfully" May 8 00:37:31.348299 kubelet[2225]: E0508 00:37:31.348263 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:31.351628 kubelet[2225]: E0508 00:37:31.351577 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:31.353793 kubelet[2225]: E0508 00:37:31.353314 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:32.355146 kubelet[2225]: E0508 00:37:32.355090 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:32.355573 kubelet[2225]: E0508 00:37:32.355359 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:32.355573 kubelet[2225]: E0508 00:37:32.355432 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:32.740149 kubelet[2225]: E0508 00:37:32.740090 2225 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 8 00:37:33.356508 kubelet[2225]: E0508 00:37:33.356463 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:33.643653 kubelet[2225]: E0508 00:37:33.643616 2225 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 8 00:37:34.181345 kubelet[2225]: I0508 00:37:34.181305 2225 apiserver.go:52] "Watching apiserver" May 8 00:37:34.188947 kubelet[2225]: I0508 00:37:34.188909 2225 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:37:34.305111 kubelet[2225]: E0508 00:37:34.305067 2225 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 8 00:37:34.936035 update_engine[1453]: I20250508 00:37:34.935941 1453 update_attempter.cc:509] Updating boot flags... May 8 00:37:34.986636 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2508) May 8 00:37:35.021674 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2506) May 8 00:37:35.055632 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2506) May 8 00:37:35.365134 kubelet[2225]: E0508 00:37:35.364998 2225 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 8 00:37:36.619952 kubelet[2225]: E0508 00:37:36.619914 2225 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:37:36.837648 kubelet[2225]: E0508 00:37:36.837607 2225 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:37:36.902173 kubelet[2225]: I0508 00:37:36.902147 2225 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:37:36.963748 kubelet[2225]: I0508 00:37:36.963710 2225 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:37:37.952083 kubelet[2225]: E0508 00:37:37.952041 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:38.304164 systemd[1]: Reloading requested from client PID 2518 ('systemctl') (unit session-7.scope)... May 8 00:37:38.304183 systemd[1]: Reloading... May 8 00:37:38.362938 kubelet[2225]: E0508 00:37:38.362439 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:38.391096 zram_generator::config[2558]: No configuration found. May 8 00:37:38.502638 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:37:38.596842 systemd[1]: Reloading finished in 292 ms. May 8 00:37:38.643402 kubelet[2225]: I0508 00:37:38.643358 2225 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:37:38.643473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:37:38.665165 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:37:38.665461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:37:38.665514 systemd[1]: kubelet.service: Consumed 1.117s CPU time, 118.3M memory peak, 0B memory swap peak. May 8 00:37:38.671812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:37:38.813005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:37:38.818528 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:37:38.862566 kubelet[2602]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:37:38.862566 kubelet[2602]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:37:38.862566 kubelet[2602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:37:38.862566 kubelet[2602]: I0508 00:37:38.862532 2602 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:37:38.867117 kubelet[2602]: I0508 00:37:38.867080 2602 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:37:38.867117 kubelet[2602]: I0508 00:37:38.867109 2602 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:37:38.867330 kubelet[2602]: I0508 00:37:38.867313 2602 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:37:38.868543 kubelet[2602]: I0508 00:37:38.868519 2602 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:37:38.869732 kubelet[2602]: I0508 00:37:38.869580 2602 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:37:38.878131 kubelet[2602]: I0508 00:37:38.878100 2602 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:37:38.878547 kubelet[2602]: I0508 00:37:38.878317 2602 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:37:38.878635 kubelet[2602]: I0508 00:37:38.878346 2602 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:37:38.878723 kubelet[2602]: I0508 00:37:38.878653 2602 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:37:38.878723 
kubelet[2602]: I0508 00:37:38.878665 2602 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:37:38.878723 kubelet[2602]: I0508 00:37:38.878713 2602 state_mem.go:36] "Initialized new in-memory state store" May 8 00:37:38.878848 kubelet[2602]: I0508 00:37:38.878818 2602 kubelet.go:400] "Attempting to sync node with API server" May 8 00:37:38.878848 kubelet[2602]: I0508 00:37:38.878834 2602 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:37:38.878982 kubelet[2602]: I0508 00:37:38.878866 2602 kubelet.go:312] "Adding apiserver pod source" May 8 00:37:38.878982 kubelet[2602]: I0508 00:37:38.878886 2602 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:37:38.880023 kubelet[2602]: I0508 00:37:38.879994 2602 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:37:38.880241 kubelet[2602]: I0508 00:37:38.880221 2602 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:37:38.881116 kubelet[2602]: I0508 00:37:38.881094 2602 server.go:1264] "Started kubelet" May 8 00:37:38.881488 kubelet[2602]: I0508 00:37:38.881448 2602 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:37:38.885105 kubelet[2602]: I0508 00:37:38.883277 2602 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:37:38.885105 kubelet[2602]: I0508 00:37:38.884453 2602 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:37:38.885105 kubelet[2602]: I0508 00:37:38.884881 2602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:37:38.885446 kubelet[2602]: I0508 00:37:38.885422 2602 server.go:455] "Adding debug handlers to kubelet server" May 8 00:37:38.886999 kubelet[2602]: I0508 00:37:38.886977 2602 volume_manager.go:291] "Starting 
Kubelet Volume Manager" May 8 00:37:38.887079 kubelet[2602]: I0508 00:37:38.887061 2602 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:37:38.887219 kubelet[2602]: I0508 00:37:38.887195 2602 reconciler.go:26] "Reconciler: start to sync state" May 8 00:37:38.890476 kubelet[2602]: I0508 00:37:38.889335 2602 factory.go:221] Registration of the systemd container factory successfully May 8 00:37:38.890476 kubelet[2602]: I0508 00:37:38.889430 2602 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:37:38.891725 kubelet[2602]: I0508 00:37:38.891700 2602 factory.go:221] Registration of the containerd container factory successfully May 8 00:37:38.894217 kubelet[2602]: E0508 00:37:38.894190 2602 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:37:38.896612 kubelet[2602]: I0508 00:37:38.896573 2602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:37:38.898049 kubelet[2602]: I0508 00:37:38.898035 2602 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:37:38.898136 kubelet[2602]: I0508 00:37:38.898125 2602 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:37:38.898193 kubelet[2602]: I0508 00:37:38.898185 2602 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:37:38.898292 kubelet[2602]: E0508 00:37:38.898276 2602 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:37:38.930894 kubelet[2602]: I0508 00:37:38.930817 2602 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:37:38.930894 kubelet[2602]: I0508 00:37:38.930836 2602 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:37:38.930894 kubelet[2602]: I0508 00:37:38.930862 2602 state_mem.go:36] "Initialized new in-memory state store" May 8 00:37:38.931068 kubelet[2602]: I0508 00:37:38.931004 2602 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:37:38.931068 kubelet[2602]: I0508 00:37:38.931014 2602 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:37:38.931068 kubelet[2602]: I0508 00:37:38.931032 2602 policy_none.go:49] "None policy: Start" May 8 00:37:38.931567 kubelet[2602]: I0508 00:37:38.931551 2602 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:37:38.931657 kubelet[2602]: I0508 00:37:38.931648 2602 state_mem.go:35] "Initializing new in-memory state store" May 8 00:37:38.931907 kubelet[2602]: I0508 00:37:38.931894 2602 state_mem.go:75] "Updated machine memory state" May 8 00:37:38.936069 kubelet[2602]: I0508 00:37:38.936043 2602 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:37:38.936345 kubelet[2602]: I0508 00:37:38.936238 2602 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:37:38.936385 kubelet[2602]: I0508 00:37:38.936358 2602 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:37:38.991076 kubelet[2602]: I0508 00:37:38.991050 2602 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:37:38.999404 kubelet[2602]: I0508 00:37:38.999363 2602 topology_manager.go:215] "Topology Admit Handler" podUID="d3f6534d333f4a74b128f8d019b70c9d" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:37:38.999509 kubelet[2602]: I0508 00:37:38.999470 2602 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:37:38.999535 kubelet[2602]: I0508 00:37:38.999524 2602 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:37:39.036937 kubelet[2602]: E0508 00:37:39.036883 2602 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:37:39.047959 kubelet[2602]: E0508 00:37:39.047904 2602 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:37:39.048610 kubelet[2602]: I0508 00:37:39.048563 2602 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:37:39.048681 kubelet[2602]: I0508 00:37:39.048657 2602 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:37:39.101739 sudo[2637]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:37:39.102142 sudo[2637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 00:37:39.188362 kubelet[2602]: I0508 00:37:39.188326 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:37:39.188362 kubelet[2602]: I0508 00:37:39.188357 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3f6534d333f4a74b128f8d019b70c9d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d3f6534d333f4a74b128f8d019b70c9d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:37:39.188494 kubelet[2602]: I0508 00:37:39.188376 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3f6534d333f4a74b128f8d019b70c9d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d3f6534d333f4a74b128f8d019b70c9d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:37:39.188494 kubelet[2602]: I0508 00:37:39.188393 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:37:39.188494 kubelet[2602]: I0508 00:37:39.188409 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:37:39.188494 kubelet[2602]: I0508 00:37:39.188426 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:37:39.188494 kubelet[2602]: I0508 00:37:39.188443 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3f6534d333f4a74b128f8d019b70c9d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d3f6534d333f4a74b128f8d019b70c9d\") " pod="kube-system/kube-apiserver-localhost" May 8 00:37:39.188622 kubelet[2602]: I0508 00:37:39.188463 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:37:39.188622 kubelet[2602]: I0508 00:37:39.188478 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:37:39.333188 kubelet[2602]: E0508 00:37:39.333146 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:39.337487 kubelet[2602]: E0508 00:37:39.337469 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:39.349238 kubelet[2602]: E0508 00:37:39.349210 2602 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:39.558068 sudo[2637]: pam_unix(sudo:session): session closed for user root May 8 00:37:39.882579 kubelet[2602]: I0508 00:37:39.882470 2602 apiserver.go:52] "Watching apiserver" May 8 00:37:39.912111 kubelet[2602]: E0508 00:37:39.911635 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:39.912111 kubelet[2602]: E0508 00:37:39.911981 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:39.945691 kubelet[2602]: E0508 00:37:39.945663 2602 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:37:39.946099 kubelet[2602]: E0508 00:37:39.946083 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:39.987370 kubelet[2602]: I0508 00:37:39.987330 2602 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:37:40.031712 kubelet[2602]: I0508 00:37:40.031642 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.031619656 podStartE2EDuration="1.031619656s" podCreationTimestamp="2025-05-08 00:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:39.945506759 +0000 UTC m=+1.123077183" watchObservedRunningTime="2025-05-08 00:37:40.031619656 +0000 UTC m=+1.209190080" May 8 00:37:40.059821 
kubelet[2602]: I0508 00:37:40.059633 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.0596134680000002 podStartE2EDuration="2.059613468s" podCreationTimestamp="2025-05-08 00:37:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:40.031815125 +0000 UTC m=+1.209385549" watchObservedRunningTime="2025-05-08 00:37:40.059613468 +0000 UTC m=+1.237183902" May 8 00:37:40.111907 kubelet[2602]: I0508 00:37:40.111823 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.111800541 podStartE2EDuration="3.111800541s" podCreationTimestamp="2025-05-08 00:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:40.059801224 +0000 UTC m=+1.237371648" watchObservedRunningTime="2025-05-08 00:37:40.111800541 +0000 UTC m=+1.289370965" May 8 00:37:40.912213 kubelet[2602]: E0508 00:37:40.912185 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:40.912675 kubelet[2602]: E0508 00:37:40.912377 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:41.514884 sudo[1647]: pam_unix(sudo:session): session closed for user root May 8 00:37:41.517724 sshd[1644]: pam_unix(sshd:session): session closed for user core May 8 00:37:41.522070 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:46648.service: Deactivated successfully. May 8 00:37:41.524109 systemd[1]: session-7.scope: Deactivated successfully. 
May 8 00:37:41.524359 systemd[1]: session-7.scope: Consumed 4.684s CPU time, 189.0M memory peak, 0B memory swap peak. May 8 00:37:41.524829 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. May 8 00:37:41.525599 systemd-logind[1452]: Removed session 7. May 8 00:37:42.021778 kubelet[2602]: E0508 00:37:42.021747 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:42.914543 kubelet[2602]: E0508 00:37:42.914512 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:43.915923 kubelet[2602]: E0508 00:37:43.915890 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:46.114295 kubelet[2602]: E0508 00:37:46.114255 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:46.918477 kubelet[2602]: E0508 00:37:46.918441 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:49.374466 kubelet[2602]: E0508 00:37:49.374433 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:51.218281 kubelet[2602]: I0508 00:37:51.218250 2602 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:37:51.243935 containerd[1467]: time="2025-05-08T00:37:51.243898566Z" level=info msg="No cni config template is specified, wait for 
other system components to drop the config." May 8 00:37:51.244251 kubelet[2602]: I0508 00:37:51.244103 2602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:37:51.828622 kubelet[2602]: I0508 00:37:51.826876 2602 topology_manager.go:215] "Topology Admit Handler" podUID="d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4" podNamespace="kube-system" podName="kube-proxy-hvjn4" May 8 00:37:51.833939 kubelet[2602]: I0508 00:37:51.833893 2602 topology_manager.go:215] "Topology Admit Handler" podUID="7e6b0833-a46b-4079-8f93-52198b99baed" podNamespace="kube-system" podName="cilium-jgkw9" May 8 00:37:51.841771 systemd[1]: Created slice kubepods-besteffort-podd383a50c_159e_4a0b_a7ff_e6e2fcc1daf4.slice - libcontainer container kubepods-besteffort-podd383a50c_159e_4a0b_a7ff_e6e2fcc1daf4.slice. May 8 00:37:51.862198 systemd[1]: Created slice kubepods-burstable-pod7e6b0833_a46b_4079_8f93_52198b99baed.slice - libcontainer container kubepods-burstable-pod7e6b0833_a46b_4079_8f93_52198b99baed.slice. 
May 8 00:37:51.870886 kubelet[2602]: I0508 00:37:51.870851 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-etc-cni-netd\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.870886 kubelet[2602]: I0508 00:37:51.870886 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e6b0833-a46b-4079-8f93-52198b99baed-hubble-tls\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.870990 kubelet[2602]: I0508 00:37:51.870906 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9r9\" (UniqueName: \"kubernetes.io/projected/d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4-kube-api-access-wt9r9\") pod \"kube-proxy-hvjn4\" (UID: \"d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4\") " pod="kube-system/kube-proxy-hvjn4" May 8 00:37:51.870990 kubelet[2602]: I0508 00:37:51.870925 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-xtables-lock\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.870990 kubelet[2602]: I0508 00:37:51.870940 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-host-proc-sys-kernel\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.870990 kubelet[2602]: I0508 00:37:51.870954 2602 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-cgroup\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.870990 kubelet[2602]: I0508 00:37:51.870972 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e6b0833-a46b-4079-8f93-52198b99baed-clustermesh-secrets\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.871108 kubelet[2602]: I0508 00:37:51.870989 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-config-path\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.871108 kubelet[2602]: I0508 00:37:51.871005 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4-kube-proxy\") pod \"kube-proxy-hvjn4\" (UID: \"d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4\") " pod="kube-system/kube-proxy-hvjn4" May 8 00:37:51.871108 kubelet[2602]: I0508 00:37:51.871020 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-host-proc-sys-net\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.871108 kubelet[2602]: I0508 00:37:51.871083 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns7cb\" (UniqueName: 
\"kubernetes.io/projected/7e6b0833-a46b-4079-8f93-52198b99baed-kube-api-access-ns7cb\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.871202 kubelet[2602]: I0508 00:37:51.871112 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-run\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.871202 kubelet[2602]: I0508 00:37:51.871132 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-bpf-maps\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.871202 kubelet[2602]: I0508 00:37:51.871157 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cni-path\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.871202 kubelet[2602]: I0508 00:37:51.871183 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-hostproc\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " pod="kube-system/cilium-jgkw9" May 8 00:37:51.871202 kubelet[2602]: I0508 00:37:51.871197 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-lib-modules\") pod \"cilium-jgkw9\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " 
pod="kube-system/cilium-jgkw9" May 8 00:37:51.871317 kubelet[2602]: I0508 00:37:51.871215 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4-xtables-lock\") pod \"kube-proxy-hvjn4\" (UID: \"d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4\") " pod="kube-system/kube-proxy-hvjn4" May 8 00:37:51.871317 kubelet[2602]: I0508 00:37:51.871237 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4-lib-modules\") pod \"kube-proxy-hvjn4\" (UID: \"d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4\") " pod="kube-system/kube-proxy-hvjn4" May 8 00:37:52.159391 kubelet[2602]: E0508 00:37:52.159341 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:52.160059 containerd[1467]: time="2025-05-08T00:37:52.160014889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvjn4,Uid:d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4,Namespace:kube-system,Attempt:0,}" May 8 00:37:52.164821 kubelet[2602]: E0508 00:37:52.164803 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:52.165112 containerd[1467]: time="2025-05-08T00:37:52.165076697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgkw9,Uid:7e6b0833-a46b-4079-8f93-52198b99baed,Namespace:kube-system,Attempt:0,}" May 8 00:37:52.192120 containerd[1467]: time="2025-05-08T00:37:52.192017495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:52.192120 containerd[1467]: time="2025-05-08T00:37:52.192079151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:52.192120 containerd[1467]: time="2025-05-08T00:37:52.192089120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:52.192343 containerd[1467]: time="2025-05-08T00:37:52.192170072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:52.194557 containerd[1467]: time="2025-05-08T00:37:52.194327392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:52.194557 containerd[1467]: time="2025-05-08T00:37:52.194432370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:52.194557 containerd[1467]: time="2025-05-08T00:37:52.194460142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:52.194737 containerd[1467]: time="2025-05-08T00:37:52.194550954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:52.210758 systemd[1]: Started cri-containerd-08cff6fad5edcdcc92c5f5f56c36ab9dbb32b3769450ed5ea01725593f0e4f8f.scope - libcontainer container 08cff6fad5edcdcc92c5f5f56c36ab9dbb32b3769450ed5ea01725593f0e4f8f. May 8 00:37:52.215227 systemd[1]: Started cri-containerd-0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e.scope - libcontainer container 0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e. 
May 8 00:37:52.222616 kubelet[2602]: I0508 00:37:52.221030 2602 topology_manager.go:215] "Topology Admit Handler" podUID="ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b" podNamespace="kube-system" podName="cilium-operator-599987898-7q85s" May 8 00:37:52.230058 systemd[1]: Created slice kubepods-besteffort-podee8ef9a6_fa83_4916_a5c4_e5ce80f08a3b.slice - libcontainer container kubepods-besteffort-podee8ef9a6_fa83_4916_a5c4_e5ce80f08a3b.slice. May 8 00:37:52.250981 containerd[1467]: time="2025-05-08T00:37:52.250932485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvjn4,Uid:d383a50c-159e-4a0b-a7ff-e6e2fcc1daf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"08cff6fad5edcdcc92c5f5f56c36ab9dbb32b3769450ed5ea01725593f0e4f8f\"" May 8 00:37:52.251709 kubelet[2602]: E0508 00:37:52.251674 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:52.255872 containerd[1467]: time="2025-05-08T00:37:52.255812420Z" level=info msg="CreateContainer within sandbox \"08cff6fad5edcdcc92c5f5f56c36ab9dbb32b3769450ed5ea01725593f0e4f8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:37:52.258821 containerd[1467]: time="2025-05-08T00:37:52.258774705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgkw9,Uid:7e6b0833-a46b-4079-8f93-52198b99baed,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\"" May 8 00:37:52.259506 kubelet[2602]: E0508 00:37:52.259459 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:52.262080 containerd[1467]: time="2025-05-08T00:37:52.262046092Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:37:52.273695 kubelet[2602]: I0508 00:37:52.273654 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b-cilium-config-path\") pod \"cilium-operator-599987898-7q85s\" (UID: \"ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b\") " pod="kube-system/cilium-operator-599987898-7q85s" May 8 00:37:52.273855 kubelet[2602]: I0508 00:37:52.273697 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg5pk\" (UniqueName: \"kubernetes.io/projected/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b-kube-api-access-sg5pk\") pod \"cilium-operator-599987898-7q85s\" (UID: \"ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b\") " pod="kube-system/cilium-operator-599987898-7q85s" May 8 00:37:52.289770 containerd[1467]: time="2025-05-08T00:37:52.289720692Z" level=info msg="CreateContainer within sandbox \"08cff6fad5edcdcc92c5f5f56c36ab9dbb32b3769450ed5ea01725593f0e4f8f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"721b8c3ef490bc9f83cb966f05b860ef30c602ad5973b40e3f2c8efe4191e5ee\"" May 8 00:37:52.290810 containerd[1467]: time="2025-05-08T00:37:52.290763374Z" level=info msg="StartContainer for \"721b8c3ef490bc9f83cb966f05b860ef30c602ad5973b40e3f2c8efe4191e5ee\"" May 8 00:37:52.319735 systemd[1]: Started cri-containerd-721b8c3ef490bc9f83cb966f05b860ef30c602ad5973b40e3f2c8efe4191e5ee.scope - libcontainer container 721b8c3ef490bc9f83cb966f05b860ef30c602ad5973b40e3f2c8efe4191e5ee. 
May 8 00:37:52.349959 containerd[1467]: time="2025-05-08T00:37:52.349839908Z" level=info msg="StartContainer for \"721b8c3ef490bc9f83cb966f05b860ef30c602ad5973b40e3f2c8efe4191e5ee\" returns successfully" May 8 00:37:52.536111 kubelet[2602]: E0508 00:37:52.536001 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:52.536648 containerd[1467]: time="2025-05-08T00:37:52.536396386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7q85s,Uid:ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b,Namespace:kube-system,Attempt:0,}" May 8 00:37:52.912150 containerd[1467]: time="2025-05-08T00:37:52.911971063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:37:52.912150 containerd[1467]: time="2025-05-08T00:37:52.912094636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:37:52.912150 containerd[1467]: time="2025-05-08T00:37:52.912113431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:52.913253 containerd[1467]: time="2025-05-08T00:37:52.913074229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:37:52.929643 kubelet[2602]: E0508 00:37:52.929585 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:37:52.939772 systemd[1]: Started cri-containerd-6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570.scope - libcontainer container 6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570. 
May 8 00:37:52.942110 kubelet[2602]: I0508 00:37:52.941890 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hvjn4" podStartSLOduration=1.941868576 podStartE2EDuration="1.941868576s" podCreationTimestamp="2025-05-08 00:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:37:52.941508148 +0000 UTC m=+14.119078572" watchObservedRunningTime="2025-05-08 00:37:52.941868576 +0000 UTC m=+14.119439010" May 8 00:37:52.984462 containerd[1467]: time="2025-05-08T00:37:52.984038278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7q85s,Uid:ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570\"" May 8 00:37:52.984753 kubelet[2602]: E0508 00:37:52.984717 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:00.732049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916192286.mount: Deactivated successfully. 
May 8 00:38:02.786296 containerd[1467]: time="2025-05-08T00:38:02.786227814Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:02.787391 containerd[1467]: time="2025-05-08T00:38:02.787345354Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 00:38:02.788994 containerd[1467]: time="2025-05-08T00:38:02.788959366Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:02.790617 containerd[1467]: time="2025-05-08T00:38:02.790555165Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.528466542s" May 8 00:38:02.790617 containerd[1467]: time="2025-05-08T00:38:02.790606692Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:38:02.791708 containerd[1467]: time="2025-05-08T00:38:02.791681772Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:38:02.799513 containerd[1467]: time="2025-05-08T00:38:02.799475517Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:38:02.817654 containerd[1467]: time="2025-05-08T00:38:02.817611557Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091\"" May 8 00:38:02.818172 containerd[1467]: time="2025-05-08T00:38:02.817978577Z" level=info msg="StartContainer for \"1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091\"" May 8 00:38:02.855716 systemd[1]: Started cri-containerd-1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091.scope - libcontainer container 1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091. May 8 00:38:02.884840 containerd[1467]: time="2025-05-08T00:38:02.884791817Z" level=info msg="StartContainer for \"1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091\" returns successfully" May 8 00:38:02.896876 systemd[1]: cri-containerd-1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091.scope: Deactivated successfully. May 8 00:38:03.020448 kubelet[2602]: E0508 00:38:03.020415 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:03.811814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091-rootfs.mount: Deactivated successfully. 
May 8 00:38:03.818287 containerd[1467]: time="2025-05-08T00:38:03.816094490Z" level=info msg="shim disconnected" id=1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091 namespace=k8s.io May 8 00:38:03.818287 containerd[1467]: time="2025-05-08T00:38:03.818281860Z" level=warning msg="cleaning up after shim disconnected" id=1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091 namespace=k8s.io May 8 00:38:03.818656 containerd[1467]: time="2025-05-08T00:38:03.818291688Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:04.022954 kubelet[2602]: E0508 00:38:04.022921 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:04.024568 containerd[1467]: time="2025-05-08T00:38:04.024531398Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:38:04.195071 containerd[1467]: time="2025-05-08T00:38:04.195023843Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386\"" May 8 00:38:04.196110 containerd[1467]: time="2025-05-08T00:38:04.196078324Z" level=info msg="StartContainer for \"dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386\"" May 8 00:38:04.223736 systemd[1]: Started cri-containerd-dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386.scope - libcontainer container dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386. May 8 00:38:04.272488 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:38:04.272830 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:38:04.272911 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:38:04.276992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:38:04.277246 systemd[1]: cri-containerd-dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386.scope: Deactivated successfully. May 8 00:38:04.373031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:38:04.448549 containerd[1467]: time="2025-05-08T00:38:04.448424658Z" level=info msg="StartContainer for \"dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386\" returns successfully" May 8 00:38:04.499131 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:41422.service - OpenSSH per-connection server daemon (10.0.0.1:41422). May 8 00:38:04.561698 sshd[3108]: Accepted publickey for core from 10.0.0.1 port 41422 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:04.563459 sshd[3108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:04.567766 systemd-logind[1452]: New session 8 of user core. May 8 00:38:04.577784 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 00:38:04.592960 containerd[1467]: time="2025-05-08T00:38:04.592898001Z" level=info msg="shim disconnected" id=dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386 namespace=k8s.io May 8 00:38:04.592960 containerd[1467]: time="2025-05-08T00:38:04.592953264Z" level=warning msg="cleaning up after shim disconnected" id=dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386 namespace=k8s.io May 8 00:38:04.593108 containerd[1467]: time="2025-05-08T00:38:04.592967200Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:04.715389 sshd[3108]: pam_unix(sshd:session): session closed for user core May 8 00:38:04.720016 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:41422.service: Deactivated successfully. 
May 8 00:38:04.721967 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:38:04.722636 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. May 8 00:38:04.723546 systemd-logind[1452]: Removed session 8. May 8 00:38:04.812021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386-rootfs.mount: Deactivated successfully. May 8 00:38:05.026679 kubelet[2602]: E0508 00:38:05.026118 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:05.028119 containerd[1467]: time="2025-05-08T00:38:05.027987510Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:38:05.107606 containerd[1467]: time="2025-05-08T00:38:05.107540903Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89\"" May 8 00:38:05.108190 containerd[1467]: time="2025-05-08T00:38:05.108145739Z" level=info msg="StartContainer for \"e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89\"" May 8 00:38:05.139758 systemd[1]: Started cri-containerd-e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89.scope - libcontainer container e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89. May 8 00:38:05.190058 systemd[1]: cri-containerd-e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89.scope: Deactivated successfully. 
May 8 00:38:05.279149 containerd[1467]: time="2025-05-08T00:38:05.279046279Z" level=info msg="StartContainer for \"e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89\" returns successfully" May 8 00:38:05.341609 containerd[1467]: time="2025-05-08T00:38:05.341546065Z" level=info msg="shim disconnected" id=e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89 namespace=k8s.io May 8 00:38:05.341609 containerd[1467]: time="2025-05-08T00:38:05.341606710Z" level=warning msg="cleaning up after shim disconnected" id=e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89 namespace=k8s.io May 8 00:38:05.341609 containerd[1467]: time="2025-05-08T00:38:05.341615136Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:05.811871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89-rootfs.mount: Deactivated successfully. May 8 00:38:05.997749 containerd[1467]: time="2025-05-08T00:38:05.997695114Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:06.005769 containerd[1467]: time="2025-05-08T00:38:06.005726310Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 8 00:38:06.010850 containerd[1467]: time="2025-05-08T00:38:06.010811922Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:38:06.012100 containerd[1467]: time="2025-05-08T00:38:06.012063703Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.22034975s" May 8 00:38:06.012100 containerd[1467]: time="2025-05-08T00:38:06.012100923Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:38:06.014145 containerd[1467]: time="2025-05-08T00:38:06.014111070Z" level=info msg="CreateContainer within sandbox \"6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:38:06.028834 kubelet[2602]: E0508 00:38:06.028798 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:06.031406 containerd[1467]: time="2025-05-08T00:38:06.031355851Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:38:06.252140 containerd[1467]: time="2025-05-08T00:38:06.252096408Z" level=info msg="CreateContainer within sandbox \"6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\"" May 8 00:38:06.253228 containerd[1467]: time="2025-05-08T00:38:06.252856866Z" level=info msg="StartContainer for \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\"" May 8 00:38:06.284726 systemd[1]: Started cri-containerd-575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055.scope - libcontainer 
container 575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055. May 8 00:38:06.469227 containerd[1467]: time="2025-05-08T00:38:06.469175957Z" level=info msg="StartContainer for \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\" returns successfully" May 8 00:38:06.503145 containerd[1467]: time="2025-05-08T00:38:06.502857924Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5\"" May 8 00:38:06.503811 containerd[1467]: time="2025-05-08T00:38:06.503782470Z" level=info msg="StartContainer for \"c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5\"" May 8 00:38:06.557882 systemd[1]: Started cri-containerd-c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5.scope - libcontainer container c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5. May 8 00:38:06.585662 systemd[1]: cri-containerd-c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5.scope: Deactivated successfully. 
May 8 00:38:06.606030 containerd[1467]: time="2025-05-08T00:38:06.605913827Z" level=info msg="StartContainer for \"c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5\" returns successfully" May 8 00:38:06.646369 containerd[1467]: time="2025-05-08T00:38:06.646287213Z" level=info msg="shim disconnected" id=c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5 namespace=k8s.io May 8 00:38:06.646369 containerd[1467]: time="2025-05-08T00:38:06.646354499Z" level=warning msg="cleaning up after shim disconnected" id=c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5 namespace=k8s.io May 8 00:38:06.646369 containerd[1467]: time="2025-05-08T00:38:06.646366642Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:38:07.043944 kubelet[2602]: E0508 00:38:07.043894 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:07.046067 kubelet[2602]: E0508 00:38:07.045962 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:07.051729 containerd[1467]: time="2025-05-08T00:38:07.051689709Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:38:07.096696 kubelet[2602]: I0508 00:38:07.096627 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7q85s" podStartSLOduration=2.069346579 podStartE2EDuration="15.096610779s" podCreationTimestamp="2025-05-08 00:37:52 +0000 UTC" firstStartedPulling="2025-05-08 00:37:52.985524656 +0000 UTC m=+14.163095080" lastFinishedPulling="2025-05-08 00:38:06.012788865 +0000 UTC m=+27.190359280" observedRunningTime="2025-05-08 00:38:07.069985111 
+0000 UTC m=+28.247555535" watchObservedRunningTime="2025-05-08 00:38:07.096610779 +0000 UTC m=+28.274181213" May 8 00:38:07.103805 containerd[1467]: time="2025-05-08T00:38:07.103740219Z" level=info msg="CreateContainer within sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\"" May 8 00:38:07.107034 containerd[1467]: time="2025-05-08T00:38:07.106984352Z" level=info msg="StartContainer for \"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\"" May 8 00:38:07.162851 systemd[1]: Started cri-containerd-81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf.scope - libcontainer container 81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf. May 8 00:38:07.195386 containerd[1467]: time="2025-05-08T00:38:07.195293613Z" level=info msg="StartContainer for \"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\" returns successfully" May 8 00:38:07.345679 kubelet[2602]: I0508 00:38:07.345546 2602 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 00:38:07.397650 kubelet[2602]: I0508 00:38:07.397567 2602 topology_manager.go:215] "Topology Admit Handler" podUID="271dbe5f-44d1-49b3-be51-201c85d90fd9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k46j2" May 8 00:38:07.400134 kubelet[2602]: I0508 00:38:07.400099 2602 topology_manager.go:215] "Topology Admit Handler" podUID="63fda893-dbf1-4098-a633-b03ea4a0431f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p6bkm" May 8 00:38:07.409554 systemd[1]: Created slice kubepods-burstable-pod63fda893_dbf1_4098_a633_b03ea4a0431f.slice - libcontainer container kubepods-burstable-pod63fda893_dbf1_4098_a633_b03ea4a0431f.slice. 
May 8 00:38:07.422063 systemd[1]: Created slice kubepods-burstable-pod271dbe5f_44d1_49b3_be51_201c85d90fd9.slice - libcontainer container kubepods-burstable-pod271dbe5f_44d1_49b3_be51_201c85d90fd9.slice. May 8 00:38:07.474445 kubelet[2602]: I0508 00:38:07.474336 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/271dbe5f-44d1-49b3-be51-201c85d90fd9-config-volume\") pod \"coredns-7db6d8ff4d-k46j2\" (UID: \"271dbe5f-44d1-49b3-be51-201c85d90fd9\") " pod="kube-system/coredns-7db6d8ff4d-k46j2" May 8 00:38:07.474445 kubelet[2602]: I0508 00:38:07.474408 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63fda893-dbf1-4098-a633-b03ea4a0431f-config-volume\") pod \"coredns-7db6d8ff4d-p6bkm\" (UID: \"63fda893-dbf1-4098-a633-b03ea4a0431f\") " pod="kube-system/coredns-7db6d8ff4d-p6bkm" May 8 00:38:07.474445 kubelet[2602]: I0508 00:38:07.474434 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjqh\" (UniqueName: \"kubernetes.io/projected/271dbe5f-44d1-49b3-be51-201c85d90fd9-kube-api-access-2mjqh\") pod \"coredns-7db6d8ff4d-k46j2\" (UID: \"271dbe5f-44d1-49b3-be51-201c85d90fd9\") " pod="kube-system/coredns-7db6d8ff4d-k46j2" May 8 00:38:07.474725 kubelet[2602]: I0508 00:38:07.474484 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5kh\" (UniqueName: \"kubernetes.io/projected/63fda893-dbf1-4098-a633-b03ea4a0431f-kube-api-access-df5kh\") pod \"coredns-7db6d8ff4d-p6bkm\" (UID: \"63fda893-dbf1-4098-a633-b03ea4a0431f\") " pod="kube-system/coredns-7db6d8ff4d-p6bkm" May 8 00:38:07.713194 kubelet[2602]: E0508 00:38:07.713134 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:07.713945 containerd[1467]: time="2025-05-08T00:38:07.713892144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p6bkm,Uid:63fda893-dbf1-4098-a633-b03ea4a0431f,Namespace:kube-system,Attempt:0,}" May 8 00:38:07.728146 kubelet[2602]: E0508 00:38:07.728090 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:07.728710 containerd[1467]: time="2025-05-08T00:38:07.728673736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k46j2,Uid:271dbe5f-44d1-49b3-be51-201c85d90fd9,Namespace:kube-system,Attempt:0,}" May 8 00:38:08.051719 kubelet[2602]: E0508 00:38:08.051563 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:08.052239 kubelet[2602]: E0508 00:38:08.051733 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:08.065880 kubelet[2602]: I0508 00:38:08.065780 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jgkw9" podStartSLOduration=6.534518921 podStartE2EDuration="17.065762488s" podCreationTimestamp="2025-05-08 00:37:51 +0000 UTC" firstStartedPulling="2025-05-08 00:37:52.260200589 +0000 UTC m=+13.437771013" lastFinishedPulling="2025-05-08 00:38:02.791444156 +0000 UTC m=+23.969014580" observedRunningTime="2025-05-08 00:38:08.065656359 +0000 UTC m=+29.243226793" watchObservedRunningTime="2025-05-08 00:38:08.065762488 +0000 UTC m=+29.243332912" May 8 00:38:09.053615 kubelet[2602]: E0508 00:38:09.053550 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:09.730415 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:39574.service - OpenSSH per-connection server daemon (10.0.0.1:39574). May 8 00:38:09.768563 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 39574 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:09.770616 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:09.774999 systemd-logind[1452]: New session 9 of user core. May 8 00:38:09.785755 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 00:38:09.899711 sshd[3444]: pam_unix(sshd:session): session closed for user core May 8 00:38:09.904339 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:39574.service: Deactivated successfully. May 8 00:38:09.906544 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:38:09.907185 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. May 8 00:38:09.908092 systemd-logind[1452]: Removed session 9. 
May 8 00:38:10.055352 kubelet[2602]: E0508 00:38:10.055232 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:10.289542 systemd-networkd[1403]: cilium_host: Link UP May 8 00:38:10.289730 systemd-networkd[1403]: cilium_net: Link UP May 8 00:38:10.289908 systemd-networkd[1403]: cilium_net: Gained carrier May 8 00:38:10.290090 systemd-networkd[1403]: cilium_host: Gained carrier May 8 00:38:10.398441 systemd-networkd[1403]: cilium_vxlan: Link UP May 8 00:38:10.398453 systemd-networkd[1403]: cilium_vxlan: Gained carrier May 8 00:38:10.448788 systemd-networkd[1403]: cilium_host: Gained IPv6LL May 8 00:38:10.616628 kernel: NET: Registered PF_ALG protocol family May 8 00:38:10.880878 systemd-networkd[1403]: cilium_net: Gained IPv6LL May 8 00:38:11.298430 systemd-networkd[1403]: lxc_health: Link UP May 8 00:38:11.307839 systemd-networkd[1403]: lxc_health: Gained carrier May 8 00:38:11.777734 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL May 8 00:38:11.821111 systemd-networkd[1403]: lxc4655ab4e0178: Link UP May 8 00:38:11.828622 kernel: eth0: renamed from tmp7c43c May 8 00:38:11.834323 systemd-networkd[1403]: lxc4655ab4e0178: Gained carrier May 8 00:38:11.853222 systemd-networkd[1403]: lxc6f062a9ee4f2: Link UP May 8 00:38:11.871632 kernel: eth0: renamed from tmpd966f May 8 00:38:11.883850 systemd-networkd[1403]: lxc6f062a9ee4f2: Gained carrier May 8 00:38:12.168341 kubelet[2602]: E0508 00:38:12.168306 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:12.993738 systemd-networkd[1403]: lxc6f062a9ee4f2: Gained IPv6LL May 8 00:38:13.056661 systemd-networkd[1403]: lxc4655ab4e0178: Gained IPv6LL May 8 00:38:13.061413 kubelet[2602]: E0508 00:38:13.061395 2602 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:13.184693 systemd-networkd[1403]: lxc_health: Gained IPv6LL May 8 00:38:14.912634 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:39580.service - OpenSSH per-connection server daemon (10.0.0.1:39580). May 8 00:38:14.947713 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 39580 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:14.949174 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:14.953147 systemd-logind[1452]: New session 10 of user core. May 8 00:38:14.959127 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 00:38:15.112941 sshd[3842]: pam_unix(sshd:session): session closed for user core May 8 00:38:15.116193 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. May 8 00:38:15.119402 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:39580.service: Deactivated successfully. May 8 00:38:15.122356 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:38:15.124438 systemd-logind[1452]: Removed session 10. May 8 00:38:15.306045 containerd[1467]: time="2025-05-08T00:38:15.305856521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:38:15.306045 containerd[1467]: time="2025-05-08T00:38:15.305936881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:38:15.306668 containerd[1467]: time="2025-05-08T00:38:15.306058990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:38:15.307687 containerd[1467]: time="2025-05-08T00:38:15.307626844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:38:15.315904 containerd[1467]: time="2025-05-08T00:38:15.315583039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:38:15.315904 containerd[1467]: time="2025-05-08T00:38:15.315646267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:38:15.315904 containerd[1467]: time="2025-05-08T00:38:15.315657508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:38:15.315904 containerd[1467]: time="2025-05-08T00:38:15.315755132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:38:15.331727 systemd[1]: Started cri-containerd-7c43cd792a78a7036663ace7734b3e52c5062695e6c8ee6107f4784b9509bbe5.scope - libcontainer container 7c43cd792a78a7036663ace7734b3e52c5062695e6c8ee6107f4784b9509bbe5. May 8 00:38:15.336385 systemd[1]: Started cri-containerd-d966ffeadd751b2f5b8a09c4444a022b57267995496947d0547646524eb60d68.scope - libcontainer container d966ffeadd751b2f5b8a09c4444a022b57267995496947d0547646524eb60d68. 
May 8 00:38:15.345217 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:38:15.348312 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:38:15.373153 containerd[1467]: time="2025-05-08T00:38:15.373116750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p6bkm,Uid:63fda893-dbf1-4098-a633-b03ea4a0431f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c43cd792a78a7036663ace7734b3e52c5062695e6c8ee6107f4784b9509bbe5\"" May 8 00:38:15.373713 kubelet[2602]: E0508 00:38:15.373694 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:15.375977 containerd[1467]: time="2025-05-08T00:38:15.375888513Z" level=info msg="CreateContainer within sandbox \"7c43cd792a78a7036663ace7734b3e52c5062695e6c8ee6107f4784b9509bbe5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:38:15.376175 containerd[1467]: time="2025-05-08T00:38:15.376129747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k46j2,Uid:271dbe5f-44d1-49b3-be51-201c85d90fd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d966ffeadd751b2f5b8a09c4444a022b57267995496947d0547646524eb60d68\"" May 8 00:38:15.376729 kubelet[2602]: E0508 00:38:15.376709 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:15.378911 containerd[1467]: time="2025-05-08T00:38:15.378861815Z" level=info msg="CreateContainer within sandbox \"d966ffeadd751b2f5b8a09c4444a022b57267995496947d0547646524eb60d68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:38:15.653339 containerd[1467]: time="2025-05-08T00:38:15.653271212Z" 
level=info msg="CreateContainer within sandbox \"7c43cd792a78a7036663ace7734b3e52c5062695e6c8ee6107f4784b9509bbe5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ec2afa7e5a11ef32b2ca736cb356da4b1f047bb4f39f9e4afc3d748277e2c3a\"" May 8 00:38:15.654157 containerd[1467]: time="2025-05-08T00:38:15.653992055Z" level=info msg="StartContainer for \"1ec2afa7e5a11ef32b2ca736cb356da4b1f047bb4f39f9e4afc3d748277e2c3a\"" May 8 00:38:15.660849 containerd[1467]: time="2025-05-08T00:38:15.660801207Z" level=info msg="CreateContainer within sandbox \"d966ffeadd751b2f5b8a09c4444a022b57267995496947d0547646524eb60d68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ba78feb84e47aca12ca3ef3a75b8e3128495ac6d9d4b4ad8d51ce1ddc05b436\"" May 8 00:38:15.661493 containerd[1467]: time="2025-05-08T00:38:15.661401304Z" level=info msg="StartContainer for \"4ba78feb84e47aca12ca3ef3a75b8e3128495ac6d9d4b4ad8d51ce1ddc05b436\"" May 8 00:38:15.684762 systemd[1]: Started cri-containerd-1ec2afa7e5a11ef32b2ca736cb356da4b1f047bb4f39f9e4afc3d748277e2c3a.scope - libcontainer container 1ec2afa7e5a11ef32b2ca736cb356da4b1f047bb4f39f9e4afc3d748277e2c3a. May 8 00:38:15.688166 systemd[1]: Started cri-containerd-4ba78feb84e47aca12ca3ef3a75b8e3128495ac6d9d4b4ad8d51ce1ddc05b436.scope - libcontainer container 4ba78feb84e47aca12ca3ef3a75b8e3128495ac6d9d4b4ad8d51ce1ddc05b436. 
May 8 00:38:15.722214 containerd[1467]: time="2025-05-08T00:38:15.722166190Z" level=info msg="StartContainer for \"4ba78feb84e47aca12ca3ef3a75b8e3128495ac6d9d4b4ad8d51ce1ddc05b436\" returns successfully" May 8 00:38:15.722353 containerd[1467]: time="2025-05-08T00:38:15.722161071Z" level=info msg="StartContainer for \"1ec2afa7e5a11ef32b2ca736cb356da4b1f047bb4f39f9e4afc3d748277e2c3a\" returns successfully" May 8 00:38:16.069966 kubelet[2602]: E0508 00:38:16.069821 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:16.071195 kubelet[2602]: E0508 00:38:16.071154 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:16.089696 kubelet[2602]: I0508 00:38:16.089394 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p6bkm" podStartSLOduration=24.089372567 podStartE2EDuration="24.089372567s" podCreationTimestamp="2025-05-08 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:38:16.080569282 +0000 UTC m=+37.258139706" watchObservedRunningTime="2025-05-08 00:38:16.089372567 +0000 UTC m=+37.266942991" May 8 00:38:16.104640 kubelet[2602]: I0508 00:38:16.104543 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k46j2" podStartSLOduration=24.104519629 podStartE2EDuration="24.104519629s" podCreationTimestamp="2025-05-08 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:38:16.103959137 +0000 UTC m=+37.281529561" watchObservedRunningTime="2025-05-08 00:38:16.104519629 +0000 UTC 
m=+37.282090053" May 8 00:38:17.073376 kubelet[2602]: E0508 00:38:17.073343 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:17.074022 kubelet[2602]: E0508 00:38:17.073485 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:18.075046 kubelet[2602]: E0508 00:38:18.074932 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:20.123552 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:41878.service - OpenSSH per-connection server daemon (10.0.0.1:41878). May 8 00:38:20.159362 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 41878 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:20.160892 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:20.164695 systemd-logind[1452]: New session 11 of user core. May 8 00:38:20.171700 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 00:38:20.279271 sshd[4033]: pam_unix(sshd:session): session closed for user core May 8 00:38:20.289692 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:41878.service: Deactivated successfully. May 8 00:38:20.291580 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:38:20.293258 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. May 8 00:38:20.301899 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:41884.service - OpenSSH per-connection server daemon (10.0.0.1:41884). May 8 00:38:20.302887 systemd-logind[1452]: Removed session 11. 
May 8 00:38:20.330856 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 41884 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:20.332447 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:20.336199 systemd-logind[1452]: New session 12 of user core. May 8 00:38:20.342693 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 00:38:20.481941 sshd[4048]: pam_unix(sshd:session): session closed for user core May 8 00:38:20.497444 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:41884.service: Deactivated successfully. May 8 00:38:20.499256 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:38:20.501099 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. May 8 00:38:20.513931 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:41890.service - OpenSSH per-connection server daemon (10.0.0.1:41890). May 8 00:38:20.514926 systemd-logind[1452]: Removed session 12. May 8 00:38:20.545113 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 41890 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:20.546767 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:20.550901 systemd-logind[1452]: New session 13 of user core. May 8 00:38:20.558722 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 00:38:20.668879 sshd[4060]: pam_unix(sshd:session): session closed for user core May 8 00:38:20.672583 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:41890.service: Deactivated successfully. May 8 00:38:20.674531 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:38:20.675220 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. May 8 00:38:20.676099 systemd-logind[1452]: Removed session 13. May 8 00:38:25.680787 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:41906.service - OpenSSH per-connection server daemon (10.0.0.1:41906). 
May 8 00:38:25.713723 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 41906 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:25.715525 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:25.719836 systemd-logind[1452]: New session 14 of user core. May 8 00:38:25.731750 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 00:38:25.836066 sshd[4076]: pam_unix(sshd:session): session closed for user core May 8 00:38:25.839745 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:41906.service: Deactivated successfully. May 8 00:38:25.841871 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:38:25.842697 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. May 8 00:38:25.843643 systemd-logind[1452]: Removed session 14. May 8 00:38:30.851415 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:60546.service - OpenSSH per-connection server daemon (10.0.0.1:60546). May 8 00:38:30.884632 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 60546 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:30.886036 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:30.889793 systemd-logind[1452]: New session 15 of user core. May 8 00:38:30.902723 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 00:38:31.009801 sshd[4090]: pam_unix(sshd:session): session closed for user core May 8 00:38:31.013374 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:60546.service: Deactivated successfully. May 8 00:38:31.015563 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:38:31.016238 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. May 8 00:38:31.017126 systemd-logind[1452]: Removed session 15. May 8 00:38:36.020532 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:60548.service - OpenSSH per-connection server daemon (10.0.0.1:60548). 
May 8 00:38:36.055883 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 60548 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:36.057235 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:36.060755 systemd-logind[1452]: New session 16 of user core. May 8 00:38:36.068711 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 00:38:36.181430 sshd[4105]: pam_unix(sshd:session): session closed for user core May 8 00:38:36.194496 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:60548.service: Deactivated successfully. May 8 00:38:36.196303 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:38:36.197986 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. May 8 00:38:36.203836 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:60554.service - OpenSSH per-connection server daemon (10.0.0.1:60554). May 8 00:38:36.204782 systemd-logind[1452]: Removed session 16. May 8 00:38:36.231390 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 60554 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:36.232984 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:36.236858 systemd-logind[1452]: New session 17 of user core. May 8 00:38:36.246711 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 00:38:36.477032 sshd[4120]: pam_unix(sshd:session): session closed for user core May 8 00:38:36.485611 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:60554.service: Deactivated successfully. May 8 00:38:36.487509 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:38:36.489186 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. May 8 00:38:36.494849 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:60558.service - OpenSSH per-connection server daemon (10.0.0.1:60558). May 8 00:38:36.495871 systemd-logind[1452]: Removed session 17. 
May 8 00:38:36.527568 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 60558 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:36.529180 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:36.533331 systemd-logind[1452]: New session 18 of user core. May 8 00:38:36.543715 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 00:38:38.516537 sshd[4132]: pam_unix(sshd:session): session closed for user core May 8 00:38:38.525799 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:60558.service: Deactivated successfully. May 8 00:38:38.529291 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:38:38.531775 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. May 8 00:38:38.538125 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:35756.service - OpenSSH per-connection server daemon (10.0.0.1:35756). May 8 00:38:38.540904 systemd-logind[1452]: Removed session 18. May 8 00:38:38.572820 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 35756 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:38.574539 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:38.578363 systemd-logind[1452]: New session 19 of user core. May 8 00:38:38.585728 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 00:38:38.870834 sshd[4152]: pam_unix(sshd:session): session closed for user core May 8 00:38:38.882269 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:35756.service: Deactivated successfully. May 8 00:38:38.884896 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:38:38.886681 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. May 8 00:38:38.898079 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:35772.service - OpenSSH per-connection server daemon (10.0.0.1:35772). May 8 00:38:38.899646 systemd-logind[1452]: Removed session 19. 
May 8 00:38:38.927420 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 35772 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:38.929055 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:38.933418 systemd-logind[1452]: New session 20 of user core. May 8 00:38:38.945724 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 00:38:39.058946 sshd[4164]: pam_unix(sshd:session): session closed for user core May 8 00:38:39.062827 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:35772.service: Deactivated successfully. May 8 00:38:39.065030 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:38:39.065803 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. May 8 00:38:39.066703 systemd-logind[1452]: Removed session 20. May 8 00:38:44.070881 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:35786.service - OpenSSH per-connection server daemon (10.0.0.1:35786). May 8 00:38:44.103199 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 35786 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:44.104791 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:44.109009 systemd-logind[1452]: New session 21 of user core. May 8 00:38:44.120752 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 00:38:44.231520 sshd[4181]: pam_unix(sshd:session): session closed for user core May 8 00:38:44.236235 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:35786.service: Deactivated successfully. May 8 00:38:44.239137 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:38:44.239872 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. May 8 00:38:44.240755 systemd-logind[1452]: Removed session 21. May 8 00:38:49.246272 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:39836.service - OpenSSH per-connection server daemon (10.0.0.1:39836). 
May 8 00:38:49.280293 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 39836 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:49.282161 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:49.286489 systemd-logind[1452]: New session 22 of user core. May 8 00:38:49.295761 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 00:38:49.400159 sshd[4198]: pam_unix(sshd:session): session closed for user core May 8 00:38:49.404584 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:39836.service: Deactivated successfully. May 8 00:38:49.406732 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:38:49.407438 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. May 8 00:38:49.408370 systemd-logind[1452]: Removed session 22. May 8 00:38:52.899284 kubelet[2602]: E0508 00:38:52.899238 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:38:54.411652 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:39838.service - OpenSSH per-connection server daemon (10.0.0.1:39838). May 8 00:38:54.443512 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 39838 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:54.444852 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:54.448318 systemd-logind[1452]: New session 23 of user core. May 8 00:38:54.458700 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 00:38:54.555947 sshd[4214]: pam_unix(sshd:session): session closed for user core May 8 00:38:54.559522 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:39838.service: Deactivated successfully. May 8 00:38:54.561292 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:38:54.561978 systemd-logind[1452]: Session 23 logged out. 
Waiting for processes to exit. May 8 00:38:54.562859 systemd-logind[1452]: Removed session 23. May 8 00:38:59.567246 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:39966.service - OpenSSH per-connection server daemon (10.0.0.1:39966). May 8 00:38:59.599979 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 39966 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:59.601355 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:59.605558 systemd-logind[1452]: New session 24 of user core. May 8 00:38:59.614739 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 00:38:59.720846 sshd[4229]: pam_unix(sshd:session): session closed for user core May 8 00:38:59.728783 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:39966.service: Deactivated successfully. May 8 00:38:59.731021 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:38:59.732693 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. May 8 00:38:59.740926 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:39982.service - OpenSSH per-connection server daemon (10.0.0.1:39982). May 8 00:38:59.741748 systemd-logind[1452]: Removed session 24. May 8 00:38:59.770839 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 39982 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:38:59.772369 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:38:59.776185 systemd-logind[1452]: New session 25 of user core. May 8 00:38:59.784704 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 8 00:38:59.900041 kubelet[2602]: E0508 00:38:59.899894 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:39:01.630467 containerd[1467]: time="2025-05-08T00:39:01.630396210Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:39:01.631060 containerd[1467]: time="2025-05-08T00:39:01.631017449Z" level=info msg="StopContainer for \"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\" with timeout 2 (s)" May 8 00:39:01.631290 containerd[1467]: time="2025-05-08T00:39:01.631256633Z" level=info msg="Stop container \"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\" with signal terminated" May 8 00:39:01.638164 systemd-networkd[1403]: lxc_health: Link DOWN May 8 00:39:01.638175 systemd-networkd[1403]: lxc_health: Lost carrier May 8 00:39:01.676054 systemd[1]: cri-containerd-81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf.scope: Deactivated successfully. May 8 00:39:01.676383 systemd[1]: cri-containerd-81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf.scope: Consumed 6.753s CPU time. May 8 00:39:01.696218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf-rootfs.mount: Deactivated successfully. 
May 8 00:39:01.905654 containerd[1467]: time="2025-05-08T00:39:01.905552774Z" level=info msg="StopContainer for \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\" with timeout 30 (s)" May 8 00:39:01.906355 containerd[1467]: time="2025-05-08T00:39:01.906301995Z" level=info msg="Stop container \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\" with signal terminated" May 8 00:39:01.916952 systemd[1]: cri-containerd-575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055.scope: Deactivated successfully. May 8 00:39:01.936766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055-rootfs.mount: Deactivated successfully. May 8 00:39:02.008942 containerd[1467]: time="2025-05-08T00:39:02.008870838Z" level=info msg="shim disconnected" id=81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf namespace=k8s.io May 8 00:39:02.008942 containerd[1467]: time="2025-05-08T00:39:02.008936342Z" level=warning msg="cleaning up after shim disconnected" id=81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf namespace=k8s.io May 8 00:39:02.008942 containerd[1467]: time="2025-05-08T00:39:02.008945760Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:02.009348 containerd[1467]: time="2025-05-08T00:39:02.009263442Z" level=info msg="shim disconnected" id=575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055 namespace=k8s.io May 8 00:39:02.009348 containerd[1467]: time="2025-05-08T00:39:02.009323297Z" level=warning msg="cleaning up after shim disconnected" id=575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055 namespace=k8s.io May 8 00:39:02.009348 containerd[1467]: time="2025-05-08T00:39:02.009332484Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:02.081192 containerd[1467]: time="2025-05-08T00:39:02.081136418Z" level=info msg="StopContainer for 
\"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\" returns successfully" May 8 00:39:02.081893 containerd[1467]: time="2025-05-08T00:39:02.081843809Z" level=info msg="StopPodSandbox for \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\"" May 8 00:39:02.081946 containerd[1467]: time="2025-05-08T00:39:02.081901639Z" level=info msg="Container to stop \"e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:02.081946 containerd[1467]: time="2025-05-08T00:39:02.081914142Z" level=info msg="Container to stop \"c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:02.081946 containerd[1467]: time="2025-05-08T00:39:02.081924643Z" level=info msg="Container to stop \"1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:02.081946 containerd[1467]: time="2025-05-08T00:39:02.081933850Z" level=info msg="Container to stop \"dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:02.081946 containerd[1467]: time="2025-05-08T00:39:02.081942917Z" level=info msg="Container to stop \"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:02.083954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e-shm.mount: Deactivated successfully. May 8 00:39:02.087804 systemd[1]: cri-containerd-0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e.scope: Deactivated successfully. 
May 8 00:39:02.107238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e-rootfs.mount: Deactivated successfully. May 8 00:39:02.154666 containerd[1467]: time="2025-05-08T00:39:02.154575382Z" level=info msg="StopContainer for \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\" returns successfully" May 8 00:39:02.155186 containerd[1467]: time="2025-05-08T00:39:02.155155814Z" level=info msg="StopPodSandbox for \"6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570\"" May 8 00:39:02.155225 containerd[1467]: time="2025-05-08T00:39:02.155200930Z" level=info msg="Container to stop \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:39:02.161676 systemd[1]: cri-containerd-6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570.scope: Deactivated successfully. May 8 00:39:02.334710 containerd[1467]: time="2025-05-08T00:39:02.334523859Z" level=info msg="shim disconnected" id=0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e namespace=k8s.io May 8 00:39:02.334710 containerd[1467]: time="2025-05-08T00:39:02.334579945Z" level=warning msg="cleaning up after shim disconnected" id=0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e namespace=k8s.io May 8 00:39:02.334710 containerd[1467]: time="2025-05-08T00:39:02.334609562Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:02.358205 containerd[1467]: time="2025-05-08T00:39:02.358171229Z" level=info msg="TearDown network for sandbox \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" successfully" May 8 00:39:02.358205 containerd[1467]: time="2025-05-08T00:39:02.358200864Z" level=info msg="StopPodSandbox for \"0d09dc6f89fae5588e6e0526480c9eadfa8324e52aa6355a5c70f12a172dda6e\" returns successfully" May 8 00:39:02.396793 kubelet[2602]: I0508 00:39:02.396760 
2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-host-proc-sys-kernel\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.396793 kubelet[2602]: I0508 00:39:02.396792 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-etc-cni-netd\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397202 kubelet[2602]: I0508 00:39:02.396815 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns7cb\" (UniqueName: \"kubernetes.io/projected/7e6b0833-a46b-4079-8f93-52198b99baed-kube-api-access-ns7cb\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397202 kubelet[2602]: I0508 00:39:02.396833 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e6b0833-a46b-4079-8f93-52198b99baed-hubble-tls\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397202 kubelet[2602]: I0508 00:39:02.396857 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-cgroup\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397202 kubelet[2602]: I0508 00:39:02.396871 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-lib-modules\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" 
(UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397202 kubelet[2602]: I0508 00:39:02.396885 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-xtables-lock\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397202 kubelet[2602]: I0508 00:39:02.396901 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e6b0833-a46b-4079-8f93-52198b99baed-clustermesh-secrets\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397403 kubelet[2602]: I0508 00:39:02.396919 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-config-path\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397403 kubelet[2602]: I0508 00:39:02.396933 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-run\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397403 kubelet[2602]: I0508 00:39:02.396947 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-bpf-maps\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397403 kubelet[2602]: I0508 00:39:02.396964 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cni-path\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397403 kubelet[2602]: I0508 00:39:02.396978 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-host-proc-sys-net\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397403 kubelet[2602]: I0508 00:39:02.396992 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-hostproc\") pod \"7e6b0833-a46b-4079-8f93-52198b99baed\" (UID: \"7e6b0833-a46b-4079-8f93-52198b99baed\") " May 8 00:39:02.397548 kubelet[2602]: I0508 00:39:02.396886 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.397548 kubelet[2602]: I0508 00:39:02.396917 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.397548 kubelet[2602]: I0508 00:39:02.396919 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.397548 kubelet[2602]: I0508 00:39:02.397034 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-hostproc" (OuterVolumeSpecName: "hostproc") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.397548 kubelet[2602]: I0508 00:39:02.397049 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.397686 kubelet[2602]: I0508 00:39:02.397058 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.397686 kubelet[2602]: I0508 00:39:02.397274 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.400645 kubelet[2602]: I0508 00:39:02.400614 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:39:02.400691 kubelet[2602]: I0508 00:39:02.400656 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cni-path" (OuterVolumeSpecName: "cni-path") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.400691 kubelet[2602]: I0508 00:39:02.400673 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.400691 kubelet[2602]: I0508 00:39:02.400689 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:39:02.456033 kubelet[2602]: I0508 00:39:02.455953 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6b0833-a46b-4079-8f93-52198b99baed-kube-api-access-ns7cb" (OuterVolumeSpecName: "kube-api-access-ns7cb") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "kube-api-access-ns7cb". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:39:02.456033 kubelet[2602]: I0508 00:39:02.455993 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e6b0833-a46b-4079-8f93-52198b99baed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:39:02.456404 kubelet[2602]: I0508 00:39:02.456354 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6b0833-a46b-4079-8f93-52198b99baed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7e6b0833-a46b-4079-8f93-52198b99baed" (UID: "7e6b0833-a46b-4079-8f93-52198b99baed"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:39:02.498126 kubelet[2602]: I0508 00:39:02.497967 2602 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498126 kubelet[2602]: I0508 00:39:02.498009 2602 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498126 kubelet[2602]: I0508 00:39:02.498022 2602 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498126 kubelet[2602]: I0508 00:39:02.498031 2602 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498126 kubelet[2602]: I0508 00:39:02.498039 2602 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498126 kubelet[2602]: I0508 00:39:02.498049 2602 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498126 kubelet[2602]: I0508 00:39:02.498057 2602 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498126 kubelet[2602]: I0508 00:39:02.498066 2602 
reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498628 kubelet[2602]: I0508 00:39:02.498075 2602 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ns7cb\" (UniqueName: \"kubernetes.io/projected/7e6b0833-a46b-4079-8f93-52198b99baed-kube-api-access-ns7cb\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498628 kubelet[2602]: I0508 00:39:02.498086 2602 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e6b0833-a46b-4079-8f93-52198b99baed-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498628 kubelet[2602]: I0508 00:39:02.498094 2602 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498628 kubelet[2602]: I0508 00:39:02.498102 2602 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498628 kubelet[2602]: I0508 00:39:02.498112 2602 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e6b0833-a46b-4079-8f93-52198b99baed-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498628 kubelet[2602]: I0508 00:39:02.498119 2602 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e6b0833-a46b-4079-8f93-52198b99baed-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:39:02.498873 containerd[1467]: time="2025-05-08T00:39:02.498286160Z" level=info msg="shim disconnected" 
id=6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570 namespace=k8s.io May 8 00:39:02.498873 containerd[1467]: time="2025-05-08T00:39:02.498343388Z" level=warning msg="cleaning up after shim disconnected" id=6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570 namespace=k8s.io May 8 00:39:02.498873 containerd[1467]: time="2025-05-08T00:39:02.498352605Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:39:02.513412 containerd[1467]: time="2025-05-08T00:39:02.513361728Z" level=info msg="TearDown network for sandbox \"6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570\" successfully" May 8 00:39:02.513412 containerd[1467]: time="2025-05-08T00:39:02.513401754Z" level=info msg="StopPodSandbox for \"6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570\" returns successfully" May 8 00:39:02.598653 kubelet[2602]: I0508 00:39:02.598604 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b-cilium-config-path\") pod \"ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b\" (UID: \"ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b\") " May 8 00:39:02.598653 kubelet[2602]: I0508 00:39:02.598650 2602 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg5pk\" (UniqueName: \"kubernetes.io/projected/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b-kube-api-access-sg5pk\") pod \"ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b\" (UID: \"ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b\") " May 8 00:39:02.599338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570-rootfs.mount: Deactivated successfully. May 8 00:39:02.599456 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e2f9d525c3c53217e7adb2ca54999e202f1251a5f2f4d8b3bd5ca6092933570-shm.mount: Deactivated successfully. 
May 8 00:39:02.599538 systemd[1]: var-lib-kubelet-pods-7e6b0833\x2da46b\x2d4079\x2d8f93\x2d52198b99baed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:39:02.599644 systemd[1]: var-lib-kubelet-pods-7e6b0833\x2da46b\x2d4079\x2d8f93\x2d52198b99baed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dns7cb.mount: Deactivated successfully. May 8 00:39:02.599723 systemd[1]: var-lib-kubelet-pods-7e6b0833\x2da46b\x2d4079\x2d8f93\x2d52198b99baed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:39:02.601828 kubelet[2602]: I0508 00:39:02.601761 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b-kube-api-access-sg5pk" (OuterVolumeSpecName: "kube-api-access-sg5pk") pod "ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b" (UID: "ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b"). InnerVolumeSpecName "kube-api-access-sg5pk". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:39:02.602927 kubelet[2602]: I0508 00:39:02.602891 2602 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b" (UID: "ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:39:02.603145 systemd[1]: var-lib-kubelet-pods-ee8ef9a6\x2dfa83\x2d4916\x2da5c4\x2de5ce80f08a3b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsg5pk.mount: Deactivated successfully. 
May 8 00:39:02.699008 kubelet[2602]: I0508 00:39:02.698954 2602 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 00:39:02.699008 kubelet[2602]: I0508 00:39:02.698982 2602 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sg5pk\" (UniqueName: \"kubernetes.io/projected/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b-kube-api-access-sg5pk\") on node \"localhost\" DevicePath \"\""
May 8 00:39:02.907892 systemd[1]: Removed slice kubepods-besteffort-podee8ef9a6_fa83_4916_a5c4_e5ce80f08a3b.slice - libcontainer container kubepods-besteffort-podee8ef9a6_fa83_4916_a5c4_e5ce80f08a3b.slice.
May 8 00:39:02.909114 systemd[1]: Removed slice kubepods-burstable-pod7e6b0833_a46b_4079_8f93_52198b99baed.slice - libcontainer container kubepods-burstable-pod7e6b0833_a46b_4079_8f93_52198b99baed.slice.
May 8 00:39:02.909197 systemd[1]: kubepods-burstable-pod7e6b0833_a46b_4079_8f93_52198b99baed.slice: Consumed 6.864s CPU time.
May 8 00:39:03.158869 sshd[4243]: pam_unix(sshd:session): session closed for user core
May 8 00:39:03.167558 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:39982.service: Deactivated successfully.
May 8 00:39:03.169711 systemd[1]: session-25.scope: Deactivated successfully.
May 8 00:39:03.171073 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit.
May 8 00:39:03.172361 systemd[1]: Started sshd@25-10.0.0.50:22-10.0.0.1:39990.service - OpenSSH per-connection server daemon (10.0.0.1:39990).
May 8 00:39:03.173098 systemd-logind[1452]: Removed session 25.
May 8 00:39:03.208084 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 39990 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:03.209486 kubelet[2602]: I0508 00:39:03.208924 2602 scope.go:117] "RemoveContainer" containerID="81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf"
May 8 00:39:03.210328 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:03.211304 containerd[1467]: time="2025-05-08T00:39:03.210954366Z" level=info msg="RemoveContainer for \"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\""
May 8 00:39:03.215699 systemd-logind[1452]: New session 26 of user core.
May 8 00:39:03.229716 systemd[1]: Started session-26.scope - Session 26 of User core.
May 8 00:39:03.342026 containerd[1467]: time="2025-05-08T00:39:03.341471347Z" level=info msg="RemoveContainer for \"81836ee7e4069fcaeed312c5636ad42eec9d8648ed902051641e387faaa362bf\" returns successfully"
May 8 00:39:03.342136 kubelet[2602]: I0508 00:39:03.341848 2602 scope.go:117] "RemoveContainer" containerID="c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5"
May 8 00:39:03.343125 containerd[1467]: time="2025-05-08T00:39:03.343104764Z" level=info msg="RemoveContainer for \"c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5\""
May 8 00:39:03.439869 containerd[1467]: time="2025-05-08T00:39:03.439758362Z" level=info msg="RemoveContainer for \"c91b87b281911509265ccd06c6df70a583358f8d3e8fe09e9a268b7a5bf708d5\" returns successfully"
May 8 00:39:03.440042 kubelet[2602]: I0508 00:39:03.440016 2602 scope.go:117] "RemoveContainer" containerID="e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89"
May 8 00:39:03.441085 containerd[1467]: time="2025-05-08T00:39:03.441043520Z" level=info msg="RemoveContainer for \"e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89\""
May 8 00:39:03.494789 containerd[1467]: time="2025-05-08T00:39:03.494744687Z" level=info msg="RemoveContainer for \"e78673efe73d07700edb8306da6ee3110567430ff8da681300031957c63f8f89\" returns successfully"
May 8 00:39:03.495064 kubelet[2602]: I0508 00:39:03.495035 2602 scope.go:117] "RemoveContainer" containerID="dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386"
May 8 00:39:03.496660 containerd[1467]: time="2025-05-08T00:39:03.496632166Z" level=info msg="RemoveContainer for \"dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386\""
May 8 00:39:03.582033 containerd[1467]: time="2025-05-08T00:39:03.581977858Z" level=info msg="RemoveContainer for \"dc27adfb6902191e3e76e9ebcaa00e3440f5c2bbbd6e47d2ce6dbbfd75b7e386\" returns successfully"
May 8 00:39:03.582318 kubelet[2602]: I0508 00:39:03.582268 2602 scope.go:117] "RemoveContainer" containerID="1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091"
May 8 00:39:03.583339 containerd[1467]: time="2025-05-08T00:39:03.583320314Z" level=info msg="RemoveContainer for \"1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091\""
May 8 00:39:03.698688 containerd[1467]: time="2025-05-08T00:39:03.698501971Z" level=info msg="RemoveContainer for \"1828a420b43a75a8ccf62f98166a506680f8376983f07838aeac00a35692e091\" returns successfully"
May 8 00:39:03.698901 kubelet[2602]: I0508 00:39:03.698855 2602 scope.go:117] "RemoveContainer" containerID="575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055"
May 8 00:39:03.702627 containerd[1467]: time="2025-05-08T00:39:03.702194925Z" level=info msg="RemoveContainer for \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\""
May 8 00:39:03.710705 containerd[1467]: time="2025-05-08T00:39:03.710635833Z" level=info msg="RemoveContainer for \"575c0dc2ca3abc933027f3598ff6453eba24348ce01c117d543b2e6335cd6055\" returns successfully"
May 8 00:39:03.816578 sshd[4404]: pam_unix(sshd:session): session closed for user core
May 8 00:39:03.825503 systemd[1]: sshd@25-10.0.0.50:22-10.0.0.1:39990.service: Deactivated successfully.
May 8 00:39:03.827817 systemd[1]: session-26.scope: Deactivated successfully.
May 8 00:39:03.830567 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit.
May 8 00:39:03.835258 kubelet[2602]: I0508 00:39:03.835199 2602 topology_manager.go:215] "Topology Admit Handler" podUID="8f000406-4923-4dee-bb66-97b7517a3c50" podNamespace="kube-system" podName="cilium-r266j"
May 8 00:39:03.835401 kubelet[2602]: E0508 00:39:03.835274 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e6b0833-a46b-4079-8f93-52198b99baed" containerName="mount-cgroup"
May 8 00:39:03.835401 kubelet[2602]: E0508 00:39:03.835287 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b" containerName="cilium-operator"
May 8 00:39:03.835401 kubelet[2602]: E0508 00:39:03.835296 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e6b0833-a46b-4079-8f93-52198b99baed" containerName="cilium-agent"
May 8 00:39:03.835401 kubelet[2602]: E0508 00:39:03.835303 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e6b0833-a46b-4079-8f93-52198b99baed" containerName="apply-sysctl-overwrites"
May 8 00:39:03.835401 kubelet[2602]: E0508 00:39:03.835311 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e6b0833-a46b-4079-8f93-52198b99baed" containerName="mount-bpf-fs"
May 8 00:39:03.835401 kubelet[2602]: E0508 00:39:03.835318 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e6b0833-a46b-4079-8f93-52198b99baed" containerName="clean-cilium-state"
May 8 00:39:03.835401 kubelet[2602]: I0508 00:39:03.835343 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6b0833-a46b-4079-8f93-52198b99baed" containerName="cilium-agent"
May 8 00:39:03.835401 kubelet[2602]: I0508 00:39:03.835352 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b" containerName="cilium-operator"
May 8 00:39:03.836915 systemd[1]: Started sshd@26-10.0.0.50:22-10.0.0.1:39996.service - OpenSSH per-connection server daemon (10.0.0.1:39996).
May 8 00:39:03.841119 systemd-logind[1452]: Removed session 26.
May 8 00:39:03.852567 systemd[1]: Created slice kubepods-burstable-pod8f000406_4923_4dee_bb66_97b7517a3c50.slice - libcontainer container kubepods-burstable-pod8f000406_4923_4dee_bb66_97b7517a3c50.slice.
May 8 00:39:03.869150 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 39996 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:03.870791 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:03.875120 systemd-logind[1452]: New session 27 of user core.
May 8 00:39:03.885746 systemd[1]: Started session-27.scope - Session 27 of User core.
May 8 00:39:03.905801 kubelet[2602]: I0508 00:39:03.905757 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-hostproc\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.905801 kubelet[2602]: I0508 00:39:03.905792 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-cilium-cgroup\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.905957 kubelet[2602]: I0508 00:39:03.905813 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-cni-path\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.905957 kubelet[2602]: I0508 00:39:03.905828 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-bpf-maps\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.905957 kubelet[2602]: I0508 00:39:03.905843 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-xtables-lock\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.905957 kubelet[2602]: I0508 00:39:03.905858 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-cilium-run\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.905957 kubelet[2602]: I0508 00:39:03.905874 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-lib-modules\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.905957 kubelet[2602]: I0508 00:39:03.905888 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f000406-4923-4dee-bb66-97b7517a3c50-cilium-config-path\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.906101 kubelet[2602]: I0508 00:39:03.905903 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zqj\" (UniqueName: \"kubernetes.io/projected/8f000406-4923-4dee-bb66-97b7517a3c50-kube-api-access-q8zqj\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.906101 kubelet[2602]: I0508 00:39:03.905922 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f000406-4923-4dee-bb66-97b7517a3c50-cilium-ipsec-secrets\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.906101 kubelet[2602]: I0508 00:39:03.905987 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-host-proc-sys-kernel\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.906101 kubelet[2602]: I0508 00:39:03.906041 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-etc-cni-netd\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.906101 kubelet[2602]: I0508 00:39:03.906063 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f000406-4923-4dee-bb66-97b7517a3c50-clustermesh-secrets\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.906212 kubelet[2602]: I0508 00:39:03.906082 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f000406-4923-4dee-bb66-97b7517a3c50-hubble-tls\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.906212 kubelet[2602]: I0508 00:39:03.906103 2602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f000406-4923-4dee-bb66-97b7517a3c50-host-proc-sys-net\") pod \"cilium-r266j\" (UID: \"8f000406-4923-4dee-bb66-97b7517a3c50\") " pod="kube-system/cilium-r266j"
May 8 00:39:03.935785 sshd[4417]: pam_unix(sshd:session): session closed for user core
May 8 00:39:03.943492 systemd[1]: sshd@26-10.0.0.50:22-10.0.0.1:39996.service: Deactivated successfully.
May 8 00:39:03.945353 systemd[1]: session-27.scope: Deactivated successfully.
May 8 00:39:03.946807 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit.
May 8 00:39:03.950526 kubelet[2602]: E0508 00:39:03.950415 2602 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:39:03.951838 systemd[1]: Started sshd@27-10.0.0.50:22-10.0.0.1:40010.service - OpenSSH per-connection server daemon (10.0.0.1:40010).
May 8 00:39:03.952798 systemd-logind[1452]: Removed session 27.
May 8 00:39:03.979996 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 40010 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs
May 8 00:39:03.981344 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 00:39:03.985111 systemd-logind[1452]: New session 28 of user core.
May 8 00:39:03.989725 systemd[1]: Started session-28.scope - Session 28 of User core.
May 8 00:39:04.160011 kubelet[2602]: E0508 00:39:04.159944 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:04.160661 containerd[1467]: time="2025-05-08T00:39:04.160579708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r266j,Uid:8f000406-4923-4dee-bb66-97b7517a3c50,Namespace:kube-system,Attempt:0,}"
May 8 00:39:04.191570 containerd[1467]: time="2025-05-08T00:39:04.191433739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:39:04.191700 containerd[1467]: time="2025-05-08T00:39:04.191630161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:39:04.191866 containerd[1467]: time="2025-05-08T00:39:04.191723168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:39:04.191942 containerd[1467]: time="2025-05-08T00:39:04.191897298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:39:04.214817 systemd[1]: Started cri-containerd-5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd.scope - libcontainer container 5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd.
May 8 00:39:04.237792 containerd[1467]: time="2025-05-08T00:39:04.237738767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r266j,Uid:8f000406-4923-4dee-bb66-97b7517a3c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\""
May 8 00:39:04.238445 kubelet[2602]: E0508 00:39:04.238425 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:04.240099 containerd[1467]: time="2025-05-08T00:39:04.240065279Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:39:04.266848 containerd[1467]: time="2025-05-08T00:39:04.266796233Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd7521705418a3a94e824bc39df83d92fed0ba4c7066ab2955ba01fa535b40b7\""
May 8 00:39:04.267377 containerd[1467]: time="2025-05-08T00:39:04.267348179Z" level=info msg="StartContainer for \"cd7521705418a3a94e824bc39df83d92fed0ba4c7066ab2955ba01fa535b40b7\""
May 8 00:39:04.296740 systemd[1]: Started cri-containerd-cd7521705418a3a94e824bc39df83d92fed0ba4c7066ab2955ba01fa535b40b7.scope - libcontainer container cd7521705418a3a94e824bc39df83d92fed0ba4c7066ab2955ba01fa535b40b7.
May 8 00:39:04.323008 containerd[1467]: time="2025-05-08T00:39:04.322959623Z" level=info msg="StartContainer for \"cd7521705418a3a94e824bc39df83d92fed0ba4c7066ab2955ba01fa535b40b7\" returns successfully"
May 8 00:39:04.332770 systemd[1]: cri-containerd-cd7521705418a3a94e824bc39df83d92fed0ba4c7066ab2955ba01fa535b40b7.scope: Deactivated successfully.
May 8 00:39:04.398337 containerd[1467]: time="2025-05-08T00:39:04.398255820Z" level=info msg="shim disconnected" id=cd7521705418a3a94e824bc39df83d92fed0ba4c7066ab2955ba01fa535b40b7 namespace=k8s.io
May 8 00:39:04.398337 containerd[1467]: time="2025-05-08T00:39:04.398319832Z" level=warning msg="cleaning up after shim disconnected" id=cd7521705418a3a94e824bc39df83d92fed0ba4c7066ab2955ba01fa535b40b7 namespace=k8s.io
May 8 00:39:04.398337 containerd[1467]: time="2025-05-08T00:39:04.398332025Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:39:04.901682 kubelet[2602]: I0508 00:39:04.901639 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6b0833-a46b-4079-8f93-52198b99baed" path="/var/lib/kubelet/pods/7e6b0833-a46b-4079-8f93-52198b99baed/volumes"
May 8 00:39:04.902491 kubelet[2602]: I0508 00:39:04.902468 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b" path="/var/lib/kubelet/pods/ee8ef9a6-fa83-4916-a5c4-e5ce80f08a3b/volumes"
May 8 00:39:05.217532 kubelet[2602]: E0508 00:39:05.217412 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:05.219400 containerd[1467]: time="2025-05-08T00:39:05.219338690Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:39:05.285565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237190498.mount: Deactivated successfully.
May 8 00:39:05.324502 containerd[1467]: time="2025-05-08T00:39:05.324461404Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a\""
May 8 00:39:05.325446 containerd[1467]: time="2025-05-08T00:39:05.325066912Z" level=info msg="StartContainer for \"c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a\""
May 8 00:39:05.355725 systemd[1]: Started cri-containerd-c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a.scope - libcontainer container c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a.
May 8 00:39:05.387408 systemd[1]: cri-containerd-c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a.scope: Deactivated successfully.
May 8 00:39:05.405767 containerd[1467]: time="2025-05-08T00:39:05.405720481Z" level=info msg="StartContainer for \"c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a\" returns successfully"
May 8 00:39:05.463335 containerd[1467]: time="2025-05-08T00:39:05.463267429Z" level=info msg="shim disconnected" id=c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a namespace=k8s.io
May 8 00:39:05.463335 containerd[1467]: time="2025-05-08T00:39:05.463327543Z" level=warning msg="cleaning up after shim disconnected" id=c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a namespace=k8s.io
May 8 00:39:05.463335 containerd[1467]: time="2025-05-08T00:39:05.463337381Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:39:06.011555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c537b02ea77df27306111a0c665d53eb4320d0605f479c5b71a0b4de227e211a-rootfs.mount: Deactivated successfully.
May 8 00:39:06.221948 kubelet[2602]: E0508 00:39:06.221900 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:06.224094 containerd[1467]: time="2025-05-08T00:39:06.224052029Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:39:06.345079 containerd[1467]: time="2025-05-08T00:39:06.344928144Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3\""
May 8 00:39:06.345556 containerd[1467]: time="2025-05-08T00:39:06.345466985Z" level=info msg="StartContainer for \"71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3\""
May 8 00:39:06.375717 systemd[1]: Started cri-containerd-71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3.scope - libcontainer container 71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3.
May 8 00:39:06.412805 systemd[1]: cri-containerd-71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3.scope: Deactivated successfully.
May 8 00:39:06.416866 containerd[1467]: time="2025-05-08T00:39:06.416816977Z" level=info msg="StartContainer for \"71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3\" returns successfully"
May 8 00:39:06.474699 containerd[1467]: time="2025-05-08T00:39:06.474634472Z" level=info msg="shim disconnected" id=71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3 namespace=k8s.io
May 8 00:39:06.474699 containerd[1467]: time="2025-05-08T00:39:06.474690478Z" level=warning msg="cleaning up after shim disconnected" id=71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3 namespace=k8s.io
May 8 00:39:06.474699 containerd[1467]: time="2025-05-08T00:39:06.474699645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:39:07.011086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71634b954625b9cb1c6e93d4e13dfffe9f227b6862be909c986986d9723945f3-rootfs.mount: Deactivated successfully.
May 8 00:39:07.225731 kubelet[2602]: E0508 00:39:07.225700 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:07.227369 containerd[1467]: time="2025-05-08T00:39:07.227313231Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:39:07.241677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101502215.mount: Deactivated successfully.
May 8 00:39:07.250131 containerd[1467]: time="2025-05-08T00:39:07.250084177Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab\""
May 8 00:39:07.250670 containerd[1467]: time="2025-05-08T00:39:07.250647104Z" level=info msg="StartContainer for \"0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab\""
May 8 00:39:07.280740 systemd[1]: Started cri-containerd-0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab.scope - libcontainer container 0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab.
May 8 00:39:07.305331 systemd[1]: cri-containerd-0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab.scope: Deactivated successfully.
May 8 00:39:07.308798 containerd[1467]: time="2025-05-08T00:39:07.308759112Z" level=info msg="StartContainer for \"0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab\" returns successfully"
May 8 00:39:07.333669 containerd[1467]: time="2025-05-08T00:39:07.333605701Z" level=info msg="shim disconnected" id=0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab namespace=k8s.io
May 8 00:39:07.333669 containerd[1467]: time="2025-05-08T00:39:07.333660375Z" level=warning msg="cleaning up after shim disconnected" id=0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab namespace=k8s.io
May 8 00:39:07.333669 containerd[1467]: time="2025-05-08T00:39:07.333669933Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:39:08.011255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0289f3113a523506ee2f5d869a70b8d244f220c1c621fac25e0ce8b63b92f4ab-rootfs.mount: Deactivated successfully.
May 8 00:39:08.230208 kubelet[2602]: E0508 00:39:08.230172 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:08.231983 containerd[1467]: time="2025-05-08T00:39:08.231942664Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:39:08.251449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762657365.mount: Deactivated successfully.
May 8 00:39:08.253944 containerd[1467]: time="2025-05-08T00:39:08.253899980Z" level=info msg="CreateContainer within sandbox \"5fa8edb6065c3ba6351e96257c0dc404d93b0c7328ccab7393a9654634cc63dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ab45f3c73a6a89376698ffbbfeb011281f570d5a09ac134daba486bf844c202b\""
May 8 00:39:08.254460 containerd[1467]: time="2025-05-08T00:39:08.254427017Z" level=info msg="StartContainer for \"ab45f3c73a6a89376698ffbbfeb011281f570d5a09ac134daba486bf844c202b\""
May 8 00:39:08.280736 systemd[1]: Started cri-containerd-ab45f3c73a6a89376698ffbbfeb011281f570d5a09ac134daba486bf844c202b.scope - libcontainer container ab45f3c73a6a89376698ffbbfeb011281f570d5a09ac134daba486bf844c202b.
May 8 00:39:08.310369 containerd[1467]: time="2025-05-08T00:39:08.310323917Z" level=info msg="StartContainer for \"ab45f3c73a6a89376698ffbbfeb011281f570d5a09ac134daba486bf844c202b\" returns successfully"
May 8 00:39:08.712639 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:39:08.743618 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
May 8 00:39:08.765612 kernel: DRBG: Continuing without Jitter RNG
May 8 00:39:09.234489 kubelet[2602]: E0508 00:39:09.234463 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:09.247941 kubelet[2602]: I0508 00:39:09.247879 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r266j" podStartSLOduration=6.247860578 podStartE2EDuration="6.247860578s" podCreationTimestamp="2025-05-08 00:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:39:09.2476296 +0000 UTC m=+90.425200024" watchObservedRunningTime="2025-05-08 00:39:09.247860578 +0000 UTC m=+90.425430992"
May 8 00:39:10.236609 kubelet[2602]: E0508 00:39:10.236554 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:11.692522 systemd-networkd[1403]: lxc_health: Link UP
May 8 00:39:11.704536 systemd-networkd[1403]: lxc_health: Gained carrier
May 8 00:39:11.899217 kubelet[2602]: E0508 00:39:11.899168 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:12.176414 kubelet[2602]: E0508 00:39:12.175967 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:12.238944 kubelet[2602]: E0508 00:39:12.238914 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:13.240924 kubelet[2602]: E0508 00:39:13.240887 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:39:13.728898 systemd-networkd[1403]: lxc_health: Gained IPv6LL
May 8 00:39:16.732139 sshd[4425]: pam_unix(sshd:session): session closed for user core
May 8 00:39:16.736736 systemd[1]: sshd@27-10.0.0.50:22-10.0.0.1:40010.service: Deactivated successfully.
May 8 00:39:16.739391 systemd[1]: session-28.scope: Deactivated successfully.
May 8 00:39:16.740091 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
May 8 00:39:16.741100 systemd-logind[1452]: Removed session 28.
May 8 00:39:16.899261 kubelet[2602]: E0508 00:39:16.899189 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"