May 13 00:21:50.879203 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025 May 13 00:21:50.879224 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592 May 13 00:21:50.879235 kernel: BIOS-provided physical RAM map: May 13 00:21:50.879241 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 00:21:50.879247 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 13 00:21:50.879253 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 13 00:21:50.879260 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 13 00:21:50.879267 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 13 00:21:50.879273 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 13 00:21:50.879279 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 13 00:21:50.879288 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 13 00:21:50.879294 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved May 13 00:21:50.879300 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 May 13 00:21:50.879306 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved May 13 00:21:50.879314 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 13 00:21:50.879321 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 13 00:21:50.879329 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 13 00:21:50.879336 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 13 00:21:50.879343 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 13 00:21:50.879349 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:21:50.879356 kernel: NX (Execute Disable) protection: active May 13 00:21:50.879362 kernel: APIC: Static calls initialized May 13 00:21:50.879369 kernel: efi: EFI v2.7 by EDK II May 13 00:21:50.879376 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 May 13 00:21:50.879382 kernel: SMBIOS 2.8 present. 
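The BIOS-e820 lines above are the firmware's physical memory map; everything the kernel manages must come out of the `usable` ranges. As an illustrative sketch (the regex and helper name are ours, not from any tool in this log), the map can be parsed straight out of `dmesg` text and the usable total summed:

```python
import re

# Matches e820 lines as printed above, e.g.
#   BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def usable_bytes(dmesg_text: str) -> int:
    """Sum the 'usable' ranges; bounds are inclusive, hence the +1."""
    total = 0
    for line in dmesg_text.splitlines():
        m = E820_RE.search(line)
        if m and m.group(3).strip() == "usable":
            total += int(m.group(2), 16) - int(m.group(1), 16) + 1
    return total

print(usable_bytes("BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable"))
# 7340032 bytes (7 MiB) for that single range
```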
May 13 00:21:50.879389 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 13 00:21:50.879395 kernel: Hypervisor detected: KVM
May 13 00:21:50.879404 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 00:21:50.879411 kernel: kvm-clock: using sched offset of 3992596490 cycles
May 13 00:21:50.879418 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 00:21:50.879425 kernel: tsc: Detected 2794.748 MHz processor
May 13 00:21:50.879432 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:21:50.879440 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:21:50.879446 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 13 00:21:50.879453 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 13 00:21:50.879460 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:21:50.879469 kernel: Using GB pages for direct mapping
May 13 00:21:50.879476 kernel: Secure boot disabled
May 13 00:21:50.879483 kernel: ACPI: Early table checksum verification disabled
May 13 00:21:50.879490 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 13 00:21:50.879500 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:21:50.879507 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:50.879514 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:50.879523 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 13 00:21:50.879531 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:50.879538 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:50.879545 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:50.879552 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:50.879559 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 13 00:21:50.879566 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 13 00:21:50.879575 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 13 00:21:50.879583 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 13 00:21:50.879590 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 13 00:21:50.879597 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 13 00:21:50.879604 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 13 00:21:50.879611 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 13 00:21:50.879618 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 13 00:21:50.879624 kernel: No NUMA configuration found
May 13 00:21:50.879632 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 13 00:21:50.879641 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 13 00:21:50.879648 kernel: Zone ranges:
May 13 00:21:50.879655 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:21:50.879662 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 13 00:21:50.879669 kernel: Normal empty
May 13 00:21:50.879676 kernel: Movable zone start for each node
May 13 00:21:50.879683 kernel: Early memory node ranges
May 13 00:21:50.879690 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 13 00:21:50.879697 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 13 00:21:50.879704 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 13 00:21:50.879713 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 13 00:21:50.879721 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 13 00:21:50.879728 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 13 00:21:50.879735 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 13 00:21:50.879742 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:21:50.879749 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 13 00:21:50.879756 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 13 00:21:50.879763 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:21:50.879770 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 13 00:21:50.879779 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 13 00:21:50.879786 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 13 00:21:50.879808 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 00:21:50.879816 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 00:21:50.879823 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 00:21:50.879830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 00:21:50.879837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 00:21:50.879844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 00:21:50.879851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 00:21:50.879858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 00:21:50.879868 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:21:50.879875 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 00:21:50.879882 kernel: TSC deadline timer available
May 13 00:21:50.879889 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 00:21:50.879896 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 00:21:50.879903 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 00:21:50.879910 kernel: kvm-guest: setup PV sched yield
May 13 00:21:50.879917 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 13 00:21:50.879924 kernel: Booting paravirtualized kernel on KVM
May 13 00:21:50.879934 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:21:50.879941 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 13 00:21:50.879948 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 13 00:21:50.879955 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 13 00:21:50.879962 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 00:21:50.879969 kernel: kvm-guest: PV spinlocks enabled
May 13 00:21:50.879976 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 00:21:50.879985 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 00:21:50.879995 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:21:50.880002 kernel: random: crng init done
May 13 00:21:50.880009 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:21:50.880016 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:21:50.880023 kernel: Fallback order for Node 0: 0
May 13 00:21:50.880030 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 13 00:21:50.880037 kernel: Policy zone: DMA32
May 13 00:21:50.880045 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:21:50.880052 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved)
May 13 00:21:50.880062 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:21:50.880069 kernel: ftrace: allocating 37944 entries in 149 pages
May 13 00:21:50.880076 kernel: ftrace: allocated 149 pages with 4 groups
May 13 00:21:50.880083 kernel: Dynamic Preempt: voluntary
May 13 00:21:50.880098 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:21:50.880109 kernel: rcu: RCU event tracing is enabled.
May 13 00:21:50.880117 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:21:50.880125 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:21:50.880141 kernel: Rude variant of Tasks RCU enabled.
May 13 00:21:50.880148 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:21:50.880156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:21:50.880163 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:21:50.880174 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 00:21:50.880181 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:21:50.880189 kernel: Console: colour dummy device 80x25
May 13 00:21:50.880196 kernel: printk: console [ttyS0] enabled
May 13 00:21:50.880203 kernel: ACPI: Core revision 20230628
May 13 00:21:50.880213 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 00:21:50.880221 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:21:50.880228 kernel: x2apic enabled
May 13 00:21:50.880236 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 00:21:50.880243 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 13 00:21:50.880251 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 13 00:21:50.880259 kernel: kvm-guest: setup PV IPIs
May 13 00:21:50.880266 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:21:50.880274 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 00:21:50.880284 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 13 00:21:50.880291 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 00:21:50.880299 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 00:21:50.880306 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 00:21:50.880314 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:21:50.880321 kernel: Spectre V2 : Mitigation: Retpolines
May 13 00:21:50.880329 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 00:21:50.880336 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 00:21:50.880344 kernel: RETBleed: Mitigation: untrained return thunk
May 13 00:21:50.880353 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:21:50.880361 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 13 00:21:50.880369 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 13 00:21:50.880377 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 13 00:21:50.880384 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 13 00:21:50.880392 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:21:50.880399 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:21:50.880407 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:21:50.880416 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:21:50.880424 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 00:21:50.880431 kernel: Freeing SMP alternatives memory: 32K
May 13 00:21:50.880439 kernel: pid_max: default: 32768 minimum: 301
May 13 00:21:50.880446 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:21:50.880454 kernel: landlock: Up and running.
May 13 00:21:50.880461 kernel: SELinux: Initializing.
May 13 00:21:50.880469 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:21:50.880477 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:21:50.880486 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 00:21:50.880494 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:21:50.880502 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:21:50.880509 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:21:50.880517 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 00:21:50.880524 kernel: ... version: 0
May 13 00:21:50.880531 kernel: ... bit width: 48
May 13 00:21:50.880539 kernel: ... generic registers: 6
May 13 00:21:50.880546 kernel: ... value mask: 0000ffffffffffff
May 13 00:21:50.880556 kernel: ... max period: 00007fffffffffff
May 13 00:21:50.880563 kernel: ... fixed-purpose events: 0
May 13 00:21:50.880571 kernel: ... event mask: 000000000000003f
May 13 00:21:50.880578 kernel: signal: max sigframe size: 1776
May 13 00:21:50.880586 kernel: rcu: Hierarchical SRCU implementation.
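The Spectre/RETBleed/SRSO lines above show the kernel choosing mitigations for this AMD EPYC guest. The same verdicts stay queryable at runtime through sysfs; a small sketch (the sysfs directory is the standard kernel interface, the helper is ours):

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigations() -> dict:
    """One file per known CPU vulnerability; contents mirror the boot log."""
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

for name, status in mitigations().items():
    print(f"{name:28} {status}")
# e.g. retbleed -> 'Mitigation: untrained return thunk', matching the line above
```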
May 13 00:21:50.880593 kernel: rcu: Max phase no-delay instances is 400. May 13 00:21:50.880601 kernel: smp: Bringing up secondary CPUs ... May 13 00:21:50.880608 kernel: smpboot: x86: Booting SMP configuration: May 13 00:21:50.880615 kernel: .... node #0, CPUs: #1 #2 #3 May 13 00:21:50.880625 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:21:50.880632 kernel: smpboot: Max logical packages: 1 May 13 00:21:50.880640 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 13 00:21:50.880647 kernel: devtmpfs: initialized May 13 00:21:50.880655 kernel: x86/mm: Memory block size: 128MB May 13 00:21:50.880662 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 13 00:21:50.880670 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 13 00:21:50.880677 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 13 00:21:50.880685 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 13 00:21:50.880694 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 13 00:21:50.880702 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:21:50.880710 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:21:50.880717 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:21:50.880724 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:21:50.880732 kernel: audit: initializing netlink subsys (disabled) May 13 00:21:50.880739 kernel: audit: type=2000 audit(1747095710.605:1): state=initialized audit_enabled=0 res=1 May 13 00:21:50.880747 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:21:50.880754 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:21:50.880764 kernel: cpuidle: using governor menu May 13 00:21:50.880771 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:21:50.880778 kernel: dca service started, version 1.12.1 May 13 00:21:50.880786 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 13 00:21:50.880804 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 13 00:21:50.880812 kernel: PCI: Using configuration type 1 for base access May 13 00:21:50.880819 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
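The MMCONFIG window reserved above ([mem 0xb0000000-0xbfffffff]) is PCIe ECAM space: each bus/device/function triple owns a fixed 4 KiB slice of it. A sketch of the standard ECAM address arithmetic (the helper name is ours):

```python
ECAM_BASE = 0xB0000000  # MMCONFIG base from the log line above

def ecam_addr(bus: int, dev: int, fn: int, offset: int = 0) -> int:
    """Physical address of a config-space register in ECAM layout:
    bus in bits 20-27, device in 15-19, function in 12-14."""
    assert bus < 256 and dev < 32 and fn < 8 and offset < 4096
    return ECAM_BASE | (bus << 20) | (dev << 15) | (fn << 12) | offset

# Config space of 00:1f.2, the AHCI controller enumerated below:
print(hex(ecam_addr(0, 0x1F, 2)))  # 0xb00fa000
```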
May 13 00:21:50.880827 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:21:50.880834 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:21:50.880844 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:21:50.880852 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:21:50.880859 kernel: ACPI: Added _OSI(Module Device)
May 13 00:21:50.880867 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:21:50.880874 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:21:50.880881 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:21:50.880889 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:21:50.880896 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 00:21:50.880904 kernel: ACPI: Interpreter enabled
May 13 00:21:50.880913 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 00:21:50.880921 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 00:21:50.880928 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 00:21:50.880936 kernel: PCI: Using E820 reservations for host bridge windows
May 13 00:21:50.880943 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 00:21:50.880951 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:21:50.881128 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:21:50.881273 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 00:21:50.881404 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 00:21:50.881414 kernel: PCI host bridge to bus 0000:00
May 13 00:21:50.881543 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 00:21:50.881654 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 00:21:50.881763 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 00:21:50.881890 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 00:21:50.881999 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 00:21:50.882113 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 13 00:21:50.882234 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:21:50.882369 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 00:21:50.882499 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 00:21:50.882619 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 13 00:21:50.882738 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 13 00:21:50.882971 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 13 00:21:50.883090 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 13 00:21:50.883218 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 00:21:50.883346 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:21:50.883466 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 13 00:21:50.883584 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 13 00:21:50.883704 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 13 00:21:50.883849 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 00:21:50.883996 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 13 00:21:50.884152 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 13 00:21:50.884274 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 13 00:21:50.884401 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 00:21:50.884520 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 13 00:21:50.884643 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 13 00:21:50.884762 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 13 00:21:50.884905 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 13 00:21:50.885034 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 00:21:50.885163 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 00:21:50.885292 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 00:21:50.885416 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 13 00:21:50.885540 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 13 00:21:50.885675 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 00:21:50.885813 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 13 00:21:50.885824 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 00:21:50.885832 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 00:21:50.885840 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 00:21:50.885848 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 00:21:50.885856 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 00:21:50.885867 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 00:21:50.885875 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 00:21:50.885882 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 00:21:50.885890 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 00:21:50.885898 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 00:21:50.885905 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 00:21:50.885913 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 00:21:50.885920 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 00:21:50.885928 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 00:21:50.885938 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 00:21:50.885945 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 00:21:50.885953 kernel: iommu: Default domain type: Translated
May 13 00:21:50.885961 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 00:21:50.885968 kernel: efivars: Registered efivars operations
May 13 00:21:50.885976 kernel: PCI: Using ACPI for IRQ routing
May 13 00:21:50.885983 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 00:21:50.885991 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 13 00:21:50.885998 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 13 00:21:50.886008 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 13 00:21:50.886015 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 13 00:21:50.886145 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 00:21:50.886264 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 00:21:50.886392 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 00:21:50.886402 kernel: vgaarb: loaded
May 13 00:21:50.886410 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 00:21:50.886418 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 00:21:50.886425 kernel: clocksource: Switched to clocksource kvm-clock
May 13 00:21:50.886436 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:21:50.886444 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:21:50.886451 kernel: pnp: PnP ACPI init
May 13 00:21:50.886581 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 00:21:50.886592 kernel: pnp: PnP ACPI: found 6 devices
May 13 00:21:50.886600 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 00:21:50.886607 kernel: NET: Registered PF_INET protocol family
May 13 00:21:50.886615 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:21:50.886626 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:21:50.886633 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:21:50.886641 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:21:50.886649 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:21:50.886656 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:21:50.886664 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:21:50.886671 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:21:50.886679 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:21:50.886686 kernel: NET: Registered PF_XDP protocol family
May 13 00:21:50.886861 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 13 00:21:50.886983 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 13 00:21:50.887090 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 00:21:50.887208 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 00:21:50.887315 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 00:21:50.887422 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 00:21:50.887528 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 00:21:50.887634 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 13 00:21:50.887648 kernel: PCI: CLS 0 bytes, default 64
May 13 00:21:50.887655 kernel: Initialise system trusted keyrings
May 13 00:21:50.887663 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:21:50.887671 kernel: Key type asymmetric registered
May 13 00:21:50.887678 kernel: Asymmetric key parser 'x509' registered
May 13 00:21:50.887685 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 00:21:50.887693 kernel: io scheduler mq-deadline registered
May 13 00:21:50.887700 kernel: io scheduler kyber registered
May 13 00:21:50.887708 kernel: io scheduler bfq registered
May 13 00:21:50.887718 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 00:21:50.887726 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 00:21:50.887734 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 00:21:50.887742 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 00:21:50.887749 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
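The `[vendor:device]` tags in the PCI probe lines above identify each function: 1af4 is the virtio vendor ID, 8086 Intel, 1234 QEMU's standard VGA. A lookup sketch, with names paraphrased by hand from the public pci.ids database (treat them as illustrative):

```python
PCI_IDS = {
    (0x8086, 0x29C0): "Intel 82G33/P35 DRAM controller (Q35 host bridge in QEMU)",
    (0x1234, 0x1111): "QEMU standard VGA",
    (0x1AF4, 0x1005): "virtio-rng",
    (0x1AF4, 0x1001): "virtio-blk",
    (0x1AF4, 0x1000): "virtio-net",
    (0x8086, 0x2918): "Intel ICH9 LPC bridge",
    (0x8086, 0x2922): "Intel ICH9 AHCI (SATA)",
    (0x8086, 0x2930): "Intel ICH9 SMBus",
}

def decode(tag: str) -> str:
    """'1af4:1001' -> readable name, if known."""
    vendor, device = (int(x, 16) for x in tag.split(":"))
    return PCI_IDS.get((vendor, device), "unknown")

print(decode("1af4:1001"))  # virtio-blk: the ~10 GB /dev/vda probed later
```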
May 13 00:21:50.887757 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:21:50.887764 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 00:21:50.887772 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:21:50.887780 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:21:50.887918 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 00:21:50.888032 kernel: rtc_cmos 00:04: registered as rtc0 May 13 00:21:50.888156 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:21:50 UTC (1747095710) May 13 00:21:50.888167 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:21:50.888278 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 13 00:21:50.888288 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 13 00:21:50.888295 kernel: efifb: probing for efifb May 13 00:21:50.888303 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k May 13 00:21:50.888315 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 May 13 00:21:50.888322 kernel: efifb: scrolling: redraw May 13 00:21:50.888330 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 May 13 00:21:50.888338 kernel: Console: switching to colour frame buffer device 100x37 May 13 00:21:50.888345 kernel: fb0: EFI VGA frame buffer device May 13 00:21:50.888371 kernel: pstore: Using crash dump compression: deflate May 13 00:21:50.888382 kernel: pstore: Registered efi_pstore as persistent store backend May 13 00:21:50.888392 kernel: NET: Registered PF_INET6 protocol family May 13 00:21:50.888400 kernel: Segment Routing with IPv6 May 13 00:21:50.888412 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:21:50.888420 kernel: NET: Registered PF_PACKET protocol family May 13 00:21:50.888428 kernel: Key type dns_resolver registered May 13 00:21:50.888436 kernel: IPI shorthand broadcast: enabled May 13 00:21:50.888444 kernel: sched_clock: Marking stable (619003080, 119626008)->(755356743, -16727655) May 13 00:21:50.888452 kernel: registered taskstats version 1 May 13 00:21:50.888460 kernel: Loading compiled-in X.509 certificates May 13 00:21:50.888468 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f' May 13 00:21:50.888475 kernel: Key type .fscrypt registered May 13 00:21:50.888485 kernel: Key type fscrypt-provisioning registered May 13 00:21:50.888493 kernel: ima: No TPM chip found, activating TPM-bypass! 
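The rtc_cmos line above gives the same instant twice, as an ISO timestamp and as a Unix epoch (1747095710); the audit record earlier in the log carries the same epoch. A one-line consistency check:

```python
from datetime import datetime, timezone

stamp = datetime.fromtimestamp(1747095710, tz=timezone.utc)
assert stamp.isoformat() == "2025-05-13T00:21:50+00:00"  # matches the log line
print(stamp)
```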
May 13 00:21:50.888501 kernel: ima: Allocated hash algorithm: sha1 May 13 00:21:50.888509 kernel: ima: No architecture policies found May 13 00:21:50.888516 kernel: clk: Disabling unused clocks May 13 00:21:50.888524 kernel: Freeing unused kernel image (initmem) memory: 42864K May 13 00:21:50.888532 kernel: Write protecting the kernel read-only data: 36864k May 13 00:21:50.888540 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 13 00:21:50.888550 kernel: Run /init as init process May 13 00:21:50.888558 kernel: with arguments: May 13 00:21:50.888565 kernel: /init May 13 00:21:50.888573 kernel: with environment: May 13 00:21:50.888580 kernel: HOME=/ May 13 00:21:50.888588 kernel: TERM=linux May 13 00:21:50.888596 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:21:50.888606 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:21:50.888618 systemd[1]: Detected virtualization kvm. May 13 00:21:50.888626 systemd[1]: Detected architecture x86-64. May 13 00:21:50.888634 systemd[1]: Running in initrd. May 13 00:21:50.888643 systemd[1]: No hostname configured, using default hostname. May 13 00:21:50.888653 systemd[1]: Hostname set to . May 13 00:21:50.888664 systemd[1]: Initializing machine ID from VM UUID. May 13 00:21:50.888672 systemd[1]: Queued start job for default target initrd.target. May 13 00:21:50.888680 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:21:50.888689 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:21:50.888698 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 00:21:50.888706 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:21:50.888715 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 00:21:50.888726 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 00:21:50.888738 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 00:21:50.888747 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 00:21:50.888755 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:21:50.888763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:21:50.888772 systemd[1]: Reached target paths.target - Path Units. May 13 00:21:50.888780 systemd[1]: Reached target slices.target - Slice Units. May 13 00:21:50.888788 systemd[1]: Reached target swap.target - Swaps. May 13 00:21:50.888866 systemd[1]: Reached target timers.target - Timer Units. May 13 00:21:50.888874 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:21:50.888883 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:21:50.888891 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 00:21:50.888899 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
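Unit names like dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device above come from systemd's path escaping: '/' becomes '-', and bytes that would be ambiguous (including a literal '-') become \xNN. A simplified sketch of the rule; the real `systemd-escape` also special-cases a leading dot, empty components, and non-ASCII bytes:

```python
def escape_path(path: str) -> str:
    """Roughly `systemd-escape --path`: enough to reproduce the names above."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separator -> dash
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)                   # kept verbatim
        else:
            out.append("\\x%02x" % ord(ch))  # everything else, incl. '-'
    return "".join(out)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```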
May 13 00:21:50.888908 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:21:50.888916 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:21:50.888924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:21:50.888935 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:21:50.888944 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 00:21:50.888952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:21:50.888960 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 00:21:50.888968 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:21:50.888977 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:21:50.888985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:21:50.888993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:21:50.889002 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 00:21:50.889013 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:21:50.889021 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:21:50.889030 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:21:50.889055 systemd-journald[190]: Collecting audit messages is disabled. May 13 00:21:50.889076 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:21:50.889085 systemd-journald[190]: Journal started May 13 00:21:50.889105 systemd-journald[190]: Runtime Journal (/run/log/journal/af040fcf5b9146f59a30cfd0f6ab7759) is 6.0M, max 48.3M, 42.2M free. May 13 00:21:50.892363 systemd-modules-load[193]: Inserted module 'overlay' May 13 00:21:50.897015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:21:50.899820 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:21:50.900520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:21:50.907971 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:21:50.909562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:21:50.912260 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:21:50.923477 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:21:50.928527 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:21:50.935813 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:21:50.937850 kernel: Bridge firewalling registered May 13 00:21:50.937838 systemd-modules-load[193]: Inserted module 'br_netfilter' May 13 00:21:50.938015 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 00:21:50.941941 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:21:50.943406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
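systemd-modules-load reports "Inserted module" for overlay and br_netfilter above; whether a module is resident can be read back later from /proc/modules, whose first whitespace-separated field per line is the module name. A minimal check:

```python
def loaded_modules() -> set:
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f}

mods = loaded_modules()
print("br_netfilter" in mods, "overlay" in mods)
```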
May 13 00:21:50.953550 dracut-cmdline[222]: dracut-dracut-053 May 13 00:21:50.955473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:21:50.958334 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592 May 13 00:21:50.963939 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:21:50.994623 systemd-resolved[241]: Positive Trust Anchors: May 13 00:21:50.994642 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:21:50.994675 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:21:50.997206 systemd-resolved[241]: Defaulting to hostname 'linux'. May 13 00:21:50.998272 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:21:51.003940 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:21:51.061839 kernel: SCSI subsystem initialized May 13 00:21:51.070822 kernel: Loading iSCSI transport class v2.0-870. May 13 00:21:51.080817 kernel: iscsi: registered transport (tcp) May 13 00:21:51.102822 kernel: iscsi: registered transport (qla4xxx) May 13 00:21:51.102845 kernel: QLogic iSCSI HBA Driver May 13 00:21:51.152111 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 00:21:51.167959 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 00:21:51.193653 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:21:51.193742 kernel: device-mapper: uevent: version 1.0.3 May 13 00:21:51.193754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 00:21:51.238833 kernel: raid6: avx2x4 gen() 25340 MB/s May 13 00:21:51.255819 kernel: raid6: avx2x2 gen() 26535 MB/s May 13 00:21:51.273051 kernel: raid6: avx2x1 gen() 23315 MB/s May 13 00:21:51.273079 kernel: raid6: using algorithm avx2x2 gen() 26535 MB/s May 13 00:21:51.290945 kernel: raid6: .... xor() 14564 MB/s, rmw enabled May 13 00:21:51.291004 kernel: raid6: using avx2x2 recovery algorithm May 13 00:21:51.312833 kernel: xor: automatically using best checksumming function avx May 13 00:21:51.476849 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 00:21:51.490167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 00:21:51.502962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:21:51.515543 systemd-udevd[414]: Using default interface naming scheme 'v255'. May 13 00:21:51.520133 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
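dracut echoes the full kernel command line above. Splitting it into key/value pairs is a whitespace-and-'=' affair; note the line genuinely repeats rootflags=rw and mount.usrflags=ro, and with a dict the last occurrence wins. A sketch with no quoting support, which this particular line does not need:

```python
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for tok in cmdline.split():
        key, _, value = tok.partition("=")  # bare flags map to ''
        params[key] = value
    return params

with open("/proc/cmdline") as f:
    params = parse_cmdline(f.read())
print(params.get("root"), params.get("verity.usr"))
# LABEL=ROOT PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132
```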
May 13 00:21:51.533053 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 00:21:51.547103 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation May 13 00:21:51.582206 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:21:51.596051 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:21:51.662659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:21:51.671971 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 00:21:51.686494 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 00:21:51.689544 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:21:51.692342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:21:51.693348 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:21:51.704999 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 00:21:51.711834 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 13 00:21:51.712050 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:21:51.715816 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:21:51.720549 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 00:21:51.728872 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:21:51.728909 kernel: GPT:9289727 != 19775487 May 13 00:21:51.728920 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:21:51.731588 kernel: GPT:9289727 != 19775487 May 13 00:21:51.731616 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:21:51.731627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:21:51.732870 kernel: AVX2 version of gcm_enc/dec engaged. May 13 00:21:51.733817 kernel: AES CTR mode by8 optimization enabled May 13 00:21:51.737098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:21:51.737331 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:21:51.741013 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:21:51.745171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:21:51.745407 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:21:51.748029 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:21:51.756829 kernel: libata version 3.00 loaded. May 13 00:21:51.760174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
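The GPT warnings above ("GPT:9289727 != 19775487") mean the backup GPT header is not where it belongs: virtio-blk reports 19775488 sectors, so the backup header should sit in the last LBA, 19775487, but the on-disk header still points at 9289727. In other words, the image was written for a roughly 4.4 GiB disk and the virtual disk was grown afterwards. The arithmetic:

```python
SECTOR = 512
total_sectors = 19775488                # from the virtio_blk probe above

expected = total_sectors - 1            # GPT backup header lives in the last LBA
print(expected)                         # 19775487

on_disk = 9289727                       # what the stale backup header claims
print((on_disk + 1) * SECTOR / 2**30)   # ~4.43 GiB: the original image size
```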
May 13 00:21:51.766892 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (473) May 13 00:21:51.769619 kernel: ahci 0000:00:1f.2: version 3.0 May 13 00:21:51.774177 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 13 00:21:51.774193 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474) May 13 00:21:51.774209 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 13 00:21:51.775259 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 13 00:21:51.783831 kernel: scsi host0: ahci May 13 00:21:51.785816 kernel: scsi host1: ahci May 13 00:21:51.787832 kernel: scsi host2: ahci May 13 00:21:51.787506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:21:51.790816 kernel: scsi host3: ahci May 13 00:21:51.792819 kernel: scsi host4: ahci May 13 00:21:51.795021 kernel: scsi host5: ahci May 13 00:21:51.795209 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 13 00:21:51.795221 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 13 00:21:51.796810 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 13 00:21:51.796969 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 00:21:51.803262 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 13 00:21:51.803285 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 13 00:21:51.803295 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 13 00:21:51.813332 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 00:21:51.821988 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 00:21:51.825436 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 00:21:51.832443 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:21:51.845001 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 00:21:51.848381 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:21:51.854404 disk-uuid[564]: Primary Header is updated. May 13 00:21:51.854404 disk-uuid[564]: Secondary Entries is updated. May 13 00:21:51.854404 disk-uuid[564]: Secondary Header is updated. May 13 00:21:51.858819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:21:51.863853 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:21:51.866229 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
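The six AHCI ports set up above report their link state on the next lines ("SATA link up 1.5 Gbps (SStatus 113 SControl 300)" for ata3, the rest down). SStatus packs detection, speed, and power state into three nibbles; a decoding sketch:

```python
def decode_sstatus(val: int):
    det = val & 0xF          # 3 = device present, phy link established
    spd = (val >> 4) & 0xF   # 1/2/3 = Gen1 1.5 / Gen2 3.0 / Gen3 6.0 Gbps
    ipm = (val >> 8) & 0xF   # 1 = interface active
    return det, spd, ipm

print(decode_sstatus(0x113))  # (3, 1, 1): link up at 1.5 Gbps, as ata3 logs
```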
May 13 00:21:52.113707 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 00:21:52.113784 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 00:21:52.113829 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 13 00:21:52.113844 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 13 00:21:52.114825 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 00:21:52.115825 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 13 00:21:52.116828 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 13 00:21:52.116848 kernel: ata3.00: applying bridge limits May 13 00:21:52.117875 kernel: ata3.00: configured for UDMA/100 May 13 00:21:52.118832 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 13 00:21:52.157830 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 13 00:21:52.158065 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:21:52.175828 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 00:21:52.864812 disk-uuid[569]: The operation has completed successfully. May 13 00:21:52.866045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:21:52.895770 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:21:52.895931 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 00:21:52.923985 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 00:21:52.929762 sh[590]: Success May 13 00:21:52.942816 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 13 00:21:52.979154 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 00:21:52.992329 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 00:21:52.995312 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 00:21:53.010447 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7 May 13 00:21:53.010501 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 00:21:53.010516 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 00:21:53.011504 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 00:21:53.012246 kernel: BTRFS info (device dm-0): using free space tree May 13 00:21:53.017370 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 00:21:53.018830 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 00:21:53.022926 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 00:21:53.024899 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 00:21:53.035567 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:21:53.035624 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:21:53.035639 kernel: BTRFS info (device vda6): using free space tree May 13 00:21:53.038823 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:21:53.047709 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:21:53.049504 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:21:53.059848 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
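verity-setup above binds /dev/mapper/usr to the verity.usrhash from the kernel command line: dm-verity hashes each 4 KiB block of the /usr partition with salted SHA-256 ("sha256-ni" in the log), then hashes those digests level by level until one root digest remains, which must equal usrhash. A sketch of just the leaf level; format version 1 prepends the salt, the parameters here are illustrative, and `veritysetup verify` is what performs the real check:

```python
import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def leaf_hashes(data: bytes, salt: bytes = b""):
    """First tree level: one salted SHA-256 digest per 4 KiB data block.
    Assumes len(data) is a multiple of BLOCK, as device sizes are."""
    for off in range(0, len(data), BLOCK):
        yield hashlib.sha256(salt + data[off:off + BLOCK]).digest()
```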
May 13 00:21:53.069100 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 00:21:53.138570 ignition[682]: Ignition 2.19.0 May 13 00:21:53.139102 ignition[682]: Stage: fetch-offline May 13 00:21:53.139142 ignition[682]: no configs at "/usr/lib/ignition/base.d" May 13 00:21:53.139164 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:21:53.139266 ignition[682]: parsed url from cmdline: "" May 13 00:21:53.139270 ignition[682]: no config URL provided May 13 00:21:53.139275 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:21:53.139284 ignition[682]: no config at "/usr/lib/ignition/user.ign" May 13 00:21:53.139311 ignition[682]: op(1): [started] loading QEMU firmware config module May 13 00:21:53.139316 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:21:53.147197 ignition[682]: op(1): [finished] loading QEMU firmware config module May 13 00:21:53.148066 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:21:53.158036 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:21:53.182103 systemd-networkd[779]: lo: Link UP May 13 00:21:53.182114 systemd-networkd[779]: lo: Gained carrier May 13 00:21:53.185181 systemd-networkd[779]: Enumeration completed May 13 00:21:53.186090 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:21:53.186201 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:21:53.186205 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:21:53.187228 systemd-networkd[779]: eth0: Link UP May 13 00:21:53.187232 systemd-networkd[779]: eth0: Gained carrier May 13 00:21:53.187238 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:21:53.196090 systemd[1]: Reached target network.target - Network. May 13 00:21:53.206024 ignition[682]: parsing config with SHA512: e4fbae2460b1efddc49ffbe4206f302fad26a1336c7d29569da9502da34b3d515baa76dc4d59fc692aeed57699ca1609d60498c2169ca4c65aefd509f256c20e May 13 00:21:53.207881 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:21:53.212226 unknown[682]: fetched base config from "system" May 13 00:21:53.212238 unknown[682]: fetched user config from "qemu" May 13 00:21:53.212675 ignition[682]: fetch-offline: fetch-offline passed May 13 00:21:53.212741 ignition[682]: Ignition finished successfully May 13 00:21:53.218300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:21:53.220892 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:21:53.228039 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
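The fetch-offline stage above finds no config URL on the command line and falls back to the QEMU firmware config device: op(1) is "modprobe qemu_fw_cfg", after which the user config becomes readable through sysfs. The by_name key below is the conventional one Ignition uses on QEMU; treat the snippet as a sketch:

```python
from pathlib import Path

BLOB = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

def read_user_config() -> bytes:
    """The same bytes Ignition parses above (the SHA512 in the log)."""
    return BLOB.read_bytes()

print(read_user_config()[:120])
```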
May 13 00:21:53.249014 ignition[783]: Ignition 2.19.0 May 13 00:21:53.249026 ignition[783]: Stage: kargs May 13 00:21:53.249252 ignition[783]: no configs at "/usr/lib/ignition/base.d" May 13 00:21:53.249266 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:21:53.253371 ignition[783]: kargs: kargs passed May 13 00:21:53.253428 ignition[783]: Ignition finished successfully May 13 00:21:53.257809 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 00:21:53.278958 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 00:21:53.296208 ignition[791]: Ignition 2.19.0 May 13 00:21:53.296223 ignition[791]: Stage: disks May 13 00:21:53.296393 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 13 00:21:53.296405 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:21:53.297264 ignition[791]: disks: disks passed May 13 00:21:53.297319 ignition[791]: Ignition finished successfully May 13 00:21:53.303177 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 00:21:53.303704 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 00:21:53.305545 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 00:21:53.306039 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:21:53.306389 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:21:53.306712 systemd[1]: Reached target basic.target - Basic System. May 13 00:21:53.330128 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 00:21:53.344692 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 00:21:53.352053 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 00:21:53.365019 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 00:21:53.455818 kernel: EXT4-fs (vda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none. May 13 00:21:53.455904 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 00:21:53.458056 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 00:21:53.469880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:21:53.472583 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 00:21:53.475176 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 00:21:53.477951 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810) May 13 00:21:53.475225 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:21:53.485167 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c May 13 00:21:53.485198 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:21:53.485214 kernel: BTRFS info (device vda6): using free space tree May 13 00:21:53.485238 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:21:53.475247 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:21:53.487568 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 00:21:53.488852 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
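systemd-fsck's one-line summary above ("clean, 14/553520 files, 52654/553472 blocks") reports inode and block usage in used/total form; the root filesystem is nearly empty at this point:

```python
import re

LINE = "ROOT: clean, 14/553520 files, 52654/553472 blocks"  # from the log
uf, tf, ub, tb = map(int, re.search(
    r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", LINE).groups())
print(f"inodes {uf/tf:.2%} used, blocks {ub/tb:.2%} used")
# inodes 0.00% used, blocks 9.51% used
```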
May 13 00:21:53.492152 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:21:53.533327 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:21:53.537383 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
May 13 00:21:53.541176 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:21:53.544907 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:21:53.632681 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:21:53.650031 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:21:53.652074 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:21:53.658824 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:21:53.677424 ignition[924]: INFO : Ignition 2.19.0
May 13 00:21:53.677424 ignition[924]: INFO : Stage: mount
May 13 00:21:53.681448 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:21:53.681448 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:21:53.681448 ignition[924]: INFO : mount: mount passed
May 13 00:21:53.681448 ignition[924]: INFO : Ignition finished successfully
May 13 00:21:53.677714 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:21:53.679853 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:21:53.692617 systemd-resolved[241]: Detected conflict on linux IN A 10.0.0.45
May 13 00:21:53.692632 systemd-resolved[241]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
May 13 00:21:53.692923 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:21:54.009708 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:21:54.023002 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:21:54.030174 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
May 13 00:21:54.030201 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 00:21:54.030213 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:21:54.031147 kernel: BTRFS info (device vda6): using free space tree
May 13 00:21:54.034813 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:21:54.035606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:21:54.062117 ignition[955]: INFO : Ignition 2.19.0
May 13 00:21:54.062117 ignition[955]: INFO : Stage: files
May 13 00:21:54.064228 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:21:54.064228 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:21:54.064228 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:21:54.064228 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:21:54.064228 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:21:54.071923 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:21:54.071923 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:21:54.071923 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:21:54.071923 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 00:21:54.071923 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 13 00:21:54.067009 unknown[955]: wrote ssh authorized keys file for user: core
May 13 00:21:54.189131 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:21:54.396198 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 00:21:54.396198 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:21:54.400603 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 00:21:54.871258 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 00:21:55.089969 systemd-networkd[779]: eth0: Gained IPv6LL
May 13 00:21:55.097049 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 00:21:55.099515 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 13 00:21:55.395989 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 00:21:55.851052 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 00:21:55.851052 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 00:21:55.854894 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:21:55.857113 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:21:55.857113 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 00:21:55.857113 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 00:21:55.861521 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:21:55.863444 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:21:55.863444 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 00:21:55.866558 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:21:55.888158 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:21:55.893790 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:21:55.895545 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:21:55.895545 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:21:55.898372 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:21:55.899828 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:21:55.901605 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:21:55.903347 ignition[955]: INFO : files: files passed
May 13 00:21:55.904150 ignition[955]: INFO : Ignition finished successfully
May 13 00:21:55.907955 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:21:55.920932 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:21:55.921862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:21:55.929461 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:21:55.929586 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:21:55.934602 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:21:55.938616 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:21:55.938616 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:21:55.941896 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:21:55.945611 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:21:55.947053 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:21:55.959954 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:21:55.984173 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:21:55.984314 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:21:55.985432 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:21:55.988127 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:21:55.988499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:21:55.989351 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:21:56.009031 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:21:56.020056 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:21:56.031758 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:21:56.032355 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:21:56.034542 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:21:56.034862 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:21:56.035001 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:21:56.040059 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:21:56.040637 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:21:56.041152 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:21:56.041478 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:21:56.041817 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:21:56.042152 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:21:56.042476 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:21:56.042829 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:21:56.043155 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:21:56.043475 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:21:56.043770 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:21:56.043910 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:21:56.061477 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:21:56.062243 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:21:56.062525 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 00:21:56.067397 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:21:56.069931 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:21:56.070091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 00:21:56.072850 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:21:56.073014 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:21:56.073647 systemd[1]: Stopped target paths.target - Path Units.
May 13 00:21:56.076464 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:21:56.080912 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:21:56.083869 systemd[1]: Stopped target slices.target - Slice Units.
May 13 00:21:56.084937 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 00:21:56.086833 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:21:56.086954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:21:56.088765 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:21:56.088909 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:21:56.090651 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:21:56.090834 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:21:56.092736 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:21:56.092863 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 00:21:56.103946 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 00:21:56.105528 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 00:21:56.106626 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:21:56.106757 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:21:56.109054 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:21:56.109240 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:21:56.115773 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:21:56.115913 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
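The umount stage below is the last Ignition pass before the pivot. It is worth relating the files-stage operations logged earlier (op(3) through op(13)) back to the config that drove them: each createFiles op corresponds to a storage.files or storage.links entry, and each unit op to a systemd.units entry. Reconstructed from the log, the relevant fragment would look roughly like this (paths and URLs are taken from the log; the surrounding structure is a hedged sketch, and the elided unit contents are deliberately left as "..."):

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "..." },
          { "name": "coreos-metadata.service", "enabled": false, "contents": "..." }
        ]
      }
    }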
May 13 00:21:56.120122 ignition[1009]: INFO : Ignition 2.19.0
May 13 00:21:56.120122 ignition[1009]: INFO : Stage: umount
May 13 00:21:56.120122 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:21:56.120122 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:21:56.120122 ignition[1009]: INFO : umount: umount passed
May 13 00:21:56.120122 ignition[1009]: INFO : Ignition finished successfully
May 13 00:21:56.121202 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:21:56.121339 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 00:21:56.123472 systemd[1]: Stopped target network.target - Network.
May 13 00:21:56.124781 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:21:56.124884 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 00:21:56.127010 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:21:56.127068 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 00:21:56.128992 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:21:56.129043 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 00:21:56.130950 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 00:21:56.131008 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 00:21:56.133136 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 00:21:56.135251 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 00:21:56.138384 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:21:56.138878 systemd-networkd[779]: eth0: DHCPv6 lease lost
May 13 00:21:56.141335 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:21:56.141462 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 00:21:56.143439 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:21:56.143674 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 00:21:56.146366 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:21:56.146425 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:21:56.154869 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 00:21:56.156645 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:21:56.156699 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:21:56.159007 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:21:56.159058 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:21:56.161084 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:21:56.161131 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 00:21:56.163252 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 00:21:56.163298 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:21:56.164692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:21:56.177501 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:21:56.177631 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 00:21:56.182593 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:21:56.182830 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:21:56.183543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:21:56.183593 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 00:21:56.186380 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:21:56.186422 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:21:56.186669 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:21:56.186720 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:21:56.191401 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:21:56.191471 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 00:21:56.192034 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:21:56.192079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:21:56.212965 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 00:21:56.214126 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 00:21:56.214185 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:21:56.216559 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 00:21:56.216610 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:21:56.217877 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:21:56.217925 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:21:56.220246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:21:56.220294 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:21:56.223033 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:21:56.223144 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 00:21:56.581432 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:21:56.581590 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 00:21:56.584090 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 00:21:56.585355 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:21:56.585418 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 00:21:56.593920 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 00:21:56.600902 systemd[1]: Switching root.
May 13 00:21:56.639860 systemd-journald[190]: Journal stopped
May 13 00:21:59.523881 systemd-journald[190]: Received SIGTERM from PID 1 (systemd).
May 13 00:21:59.523958 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:21:59.523972 kernel: SELinux: policy capability open_perms=1
May 13 00:21:59.523983 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:21:59.523998 kernel: SELinux: policy capability always_check_network=0
May 13 00:21:59.524908 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:21:59.524932 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:21:59.524945 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:21:59.524956 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:21:59.524967 kernel: audit: type=1403 audit(1747095718.598:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:21:59.524985 systemd[1]: Successfully loaded SELinux policy in 46.728ms.
May 13 00:21:59.525009 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.529ms.
May 13 00:21:59.525025 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:21:59.525038 systemd[1]: Detected virtualization kvm.
May 13 00:21:59.525050 systemd[1]: Detected architecture x86-64.
May 13 00:21:59.525061 systemd[1]: Detected first boot.
May 13 00:21:59.525078 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:21:59.525090 zram_generator::config[1056]: No configuration found.
May 13 00:21:59.525103 systemd[1]: Populated /etc with preset unit settings.
May 13 00:21:59.525115 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 00:21:59.525127 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 00:21:59.525141 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 00:21:59.525153 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 00:21:59.525165 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 00:21:59.525177 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 00:21:59.525188 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 00:21:59.525200 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 00:21:59.525212 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 00:21:59.525225 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 00:21:59.525239 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 00:21:59.525250 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:21:59.525262 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:21:59.525274 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 00:21:59.525286 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 00:21:59.525298 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 00:21:59.525310 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:21:59.525321 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 00:21:59.525333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:21:59.525347 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 00:21:59.525363 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 00:21:59.525375 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 00:21:59.525387 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 00:21:59.525398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:21:59.525410 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:21:59.525422 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:21:59.525433 systemd[1]: Reached target swap.target - Swaps.
May 13 00:21:59.525448 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 00:21:59.525460 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 00:21:59.525472 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:21:59.525484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:21:59.525496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:21:59.525508 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 00:21:59.525520 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 00:21:59.525532 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 00:21:59.525543 systemd[1]: Mounting media.mount - External Media Directory...
May 13 00:21:59.525557 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:21:59.525569 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 00:21:59.525581 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 00:21:59.525593 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 00:21:59.525605 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:21:59.525616 systemd[1]: Reached target machines.target - Containers.
May 13 00:21:59.525628 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 00:21:59.525640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:21:59.525654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:21:59.525666 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 00:21:59.525678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:21:59.525689 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:21:59.525701 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:21:59.525717 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 00:21:59.525729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:21:59.525742 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:21:59.525754 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 00:21:59.525768 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 00:21:59.525779 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 00:21:59.525791 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 00:21:59.525986 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:21:59.525998 kernel: fuse: init (API version 7.39)
May 13 00:21:59.526009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:21:59.526021 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 00:21:59.526033 kernel: loop: module loaded
May 13 00:21:59.526044 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 00:21:59.526060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:21:59.526071 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 00:21:59.526083 systemd[1]: Stopped verity-setup.service.
May 13 00:21:59.526095 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:21:59.526107 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 00:21:59.526140 systemd-journald[1119]: Collecting audit messages is disabled.
May 13 00:21:59.526161 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 00:21:59.526176 systemd[1]: Mounted media.mount - External Media Directory.
May 13 00:21:59.526188 systemd-journald[1119]: Journal started
May 13 00:21:59.526209 systemd-journald[1119]: Runtime Journal (/run/log/journal/af040fcf5b9146f59a30cfd0f6ab7759) is 6.0M, max 48.3M, 42.2M free.
May 13 00:21:59.526251 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 00:21:59.259582 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:21:59.279908 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 00:21:59.280399 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 00:21:59.528560 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:21:59.530955 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 00:21:59.532196 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 00:21:59.533516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:21:59.535120 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:21:59.535296 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 00:21:59.536812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:21:59.536998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:21:59.538426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:21:59.538599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:21:59.540143 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:21:59.540313 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 00:21:59.541962 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:21:59.542130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:21:59.543819 kernel: ACPI: bus type drm_connector registered
May 13 00:21:59.544378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:21:59.545921 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:21:59.546096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:21:59.547931 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 00:21:59.549606 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 00:21:59.563518 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 00:21:59.572942 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 00:21:59.577415 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 00:21:59.578696 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:21:59.578739 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:21:59.580962 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 13 00:21:59.585063 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 00:21:59.588673 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 00:21:59.589916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:21:59.663983 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 00:21:59.680575 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 00:21:59.683019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:21:59.685348 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 00:21:59.688412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:21:59.690609 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:21:59.697818 systemd-journald[1119]: Time spent on flushing to /var/log/journal/af040fcf5b9146f59a30cfd0f6ab7759 is 13.073ms for 994 entries.
May 13 00:21:59.697818 systemd-journald[1119]: System Journal (/var/log/journal/af040fcf5b9146f59a30cfd0f6ab7759) is 8.0M, max 195.6M, 187.6M free.
May 13 00:22:00.355904 systemd-journald[1119]: Received client request to flush runtime journal.
May 13 00:22:00.355960 kernel: loop0: detected capacity change from 0 to 142488
May 13 00:22:00.355977 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 00:22:00.355991 kernel: loop1: detected capacity change from 0 to 140768
May 13 00:22:00.356004 kernel: loop2: detected capacity change from 0 to 218376
May 13 00:22:00.356016 kernel: loop3: detected capacity change from 0 to 142488
May 13 00:22:00.356038 kernel: loop4: detected capacity change from 0 to 140768
May 13 00:22:00.356051 kernel: loop5: detected capacity change from 0 to 218376
May 13 00:22:00.356064 zram_generator::config[1210]: No configuration found.
May 13 00:21:59.693975 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 00:22:00.356217 ldconfig[1158]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 00:21:59.698931 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:21:59.702055 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:21:59.703603 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 00:21:59.705008 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 00:21:59.706467 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 00:21:59.716271 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 00:21:59.726260 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:21:59.734087 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
May 13 00:21:59.734102 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
May 13 00:21:59.734170 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 00:21:59.740361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:22:00.023366 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 00:22:00.023922 (sd-merge)[1184]: Merged extensions into '/usr'.
May 13 00:22:00.029338 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 00:22:00.029348 systemd[1]: Reloading...
May 13 00:22:00.254374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:22:00.303608 systemd[1]: Reloading finished in 273 ms.
May 13 00:22:00.348214 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 00:22:00.349847 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 00:22:00.351252 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 00:22:00.352699 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 00:22:00.358816 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 00:22:00.363251 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 00:22:00.373979 systemd[1]: Starting ensure-sysext.service...
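The (sd-merge) lines above are systemd-sysext at work: the loop0 through loop5 capacity changes correspond to the extension images being attached, after which 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' are merged as overlays onto /usr and /opt. For an image such as the kubernetes-v1.32.0-x86-64.raw written by Ignition earlier to be accepted, it must carry an extension-release file whose fields match the host's os-release; schematically (a hedged sketch of the convention, not dumped from this image):

    # usr/lib/extension-release.d/extension-release.kubernetes, inside the image
    ID=flatcar
    SYSEXT_LEVEL=1.0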
May 13 00:22:00.375885 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 13 00:22:00.378949 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 00:22:00.384809 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
May 13 00:22:00.384827 systemd[1]: Reloading...
May 13 00:22:00.441827 zram_generator::config[1283]: No configuration found.
May 13 00:22:01.145672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:22:01.197150 systemd[1]: Reloading finished in 811 ms.
May 13 00:22:01.215489 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 00:22:01.242998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:22:01.245306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:22:01.249536 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:22:01.249751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:22:01.251083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:22:01.262634 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:22:01.269341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:22:01.270547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:22:01.270752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:22:01.271673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:22:01.272147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:22:01.273917 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:22:01.274151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:22:01.275852 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
May 13 00:22:01.275873 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
May 13 00:22:01.275884 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:22:01.276054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:22:01.282622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:22:01.285171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:22:01.285494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:22:01.287820 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 00:22:01.288140 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 00:22:01.289049 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 00:22:01.289317 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
May 13 00:22:01.289391 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
May 13 00:22:01.292659 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:22:01.292669 systemd-tmpfiles[1322]: Skipping /boot
May 13 00:22:01.294104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:22:01.296501 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:22:01.298656 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:22:01.299768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:22:01.300008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:22:01.300967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:22:01.301150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:22:01.306261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:22:01.306449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:22:01.315001 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:22:01.315171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:22:01.316612 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:22:01.316625 systemd-tmpfiles[1322]: Skipping /boot
May 13 00:22:01.318551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:22:01.318746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:22:01.332006 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:22:01.336563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:22:01.337779 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:22:01.337907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:22:01.337982 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:22:01.338609 systemd[1]: Finished ensure-sysext.service.
May 13 00:22:01.339934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:22:01.340121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:22:01.345107 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:22:01.345316 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:22:01.346819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:22:01.359053 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 00:22:01.412986 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
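The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings above are harmless: two tmpfiles.d fragments declare the same path, and the first one read wins. Entries in those files are single lines of the form Type Path Mode User Group Age Argument; the /root line that provision.conf and another fragment both carry would be of this general shape (a sketch of the format, not the actual file contents):

    # tmpfiles.d format: Type Path Mode User Group Age Argument
    d /root 0700 root root - -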
May 13 00:22:01.415622 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 00:22:01.417164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:22:01.420079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:22:01.425953 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 00:22:01.431100 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 00:22:01.435631 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 00:22:01.448280 augenrules[1366]: No rules
May 13 00:22:01.450364 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 00:22:01.454332 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 00:22:01.469525 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 00:22:01.480039 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:22:01.502963 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 00:22:01.504595 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 00:22:01.516053 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 00:22:01.521339 systemd-udevd[1374]: Using default interface naming scheme 'v255'.
May 13 00:22:01.527879 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 00:22:01.551534 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 00:22:01.553714 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:22:01.555028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:22:01.567977 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:22:01.594327 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 00:22:01.613650 systemd-resolved[1357]: Positive Trust Anchors:
May 13 00:22:01.614374 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:22:01.614465 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:22:01.621878 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 00:22:01.623766 systemd[1]: Reached target time-set.target - System Time Set.
May 13 00:22:01.628481 systemd-resolved[1357]: Defaulting to hostname 'linux'.
May 13 00:22:01.630404 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:22:01.632019 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1389)
May 13 00:22:01.632727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:22:01.657821 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 13 00:22:01.662172 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 13 00:22:01.662429 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 00:22:01.661955 systemd-networkd[1391]: lo: Link UP
May 13 00:22:01.661968 systemd-networkd[1391]: lo: Gained carrier
May 13 00:22:01.666455 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 13 00:22:01.666684 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 00:22:01.664427 systemd-networkd[1391]: Enumeration completed
May 13 00:22:01.664546 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:22:01.666489 systemd[1]: Reached target network.target - Network.
May 13 00:22:01.667854 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:22:01.667858 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:22:01.668702 systemd-networkd[1391]: eth0: Link UP
May 13 00:22:01.668714 systemd-networkd[1391]: eth0: Gained carrier
May 13 00:22:01.668725 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:22:01.669842 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 13 00:22:01.672848 kernel: ACPI: button: Power Button [PWRF]
May 13 00:22:01.674935 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 00:22:01.679044 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:22:01.680593 systemd-timesyncd[1360]: Network configuration changed, trying to establish connection.
May 13 00:22:02.407796 systemd-timesyncd[1360]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 00:22:02.407838 systemd-timesyncd[1360]: Initial clock synchronization to Tue 2025-05-13 00:22:02.407644 UTC.
May 13 00:22:02.407908 systemd-resolved[1357]: Clock change detected. Flushing caches.
May 13 00:22:02.437948 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 00:22:02.439949 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 13 00:22:02.464324 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:22:02.477194 kernel: mousedev: PS/2 mouse device common for all mice
May 13 00:22:02.524615 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 00:22:02.529537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
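As in the initrd, eth0 is matched by the catch-all /usr/lib/systemd/network/zz-default.network, and networkd warns that matching on an interface name is potentially unpredictable. The heart of such a unit is a match-everything DHCP stanza; schematically (a sketch of the typical shipped unit, not a verbatim copy):

    [Match]
    Name=*

    [Network]
    DHCP=yes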
May 13 00:22:02.536220 kernel: kvm_amd: TSC scaling supported
May 13 00:22:02.536282 kernel: kvm_amd: Nested Virtualization enabled
May 13 00:22:02.536332 kernel: kvm_amd: Nested Paging enabled
May 13 00:22:02.536361 kernel: kvm_amd: LBR virtualization supported
May 13 00:22:02.536385 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 13 00:22:02.536415 kernel: kvm_amd: Virtual GIF supported
May 13 00:22:02.550512 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 00:22:02.563214 kernel: EDAC MC: Ver: 3.0.0
May 13 00:22:02.594371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:22:02.608388 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 00:22:02.622356 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 00:22:02.630041 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:22:02.669388 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 00:22:02.670891 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:22:02.674290 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:22:02.675475 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 00:22:02.676748 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 00:22:02.678248 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 00:22:02.679631 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 00:22:02.680935 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 00:22:02.682351 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 00:22:02.682382 systemd[1]: Reached target paths.target - Path Units.
May 13 00:22:02.683308 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:22:02.684801 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 00:22:02.687610 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 00:22:02.697898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 00:22:02.700640 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 00:22:02.702316 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 00:22:02.703506 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:22:02.704480 systemd[1]: Reached target basic.target - Basic System.
May 13 00:22:02.705473 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 00:22:02.705500 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 00:22:02.706667 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 00:22:02.708819 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 00:22:02.711271 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:22:02.713331 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 00:22:02.718770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 00:22:02.720086 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 00:22:02.722504 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 00:22:02.728236 jq[1440]: false
May 13 00:22:02.726233 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 00:22:02.730522 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 00:22:02.735965 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 00:22:02.743999 dbus-daemon[1439]: [system] SELinux support is enabled
May 13 00:22:02.744393 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 00:22:02.745995 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:22:02.746571 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 00:22:02.747284 systemd[1]: Starting update-engine.service - Update Engine...
May 13 00:22:02.749354 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 00:22:02.751795 extend-filesystems[1441]: Found loop3
May 13 00:22:02.751795 extend-filesystems[1441]: Found loop4
May 13 00:22:02.751795 extend-filesystems[1441]: Found loop5
May 13 00:22:02.751795 extend-filesystems[1441]: Found sr0
May 13 00:22:02.751795 extend-filesystems[1441]: Found vda
May 13 00:22:02.751795 extend-filesystems[1441]: Found vda1
May 13 00:22:02.751795 extend-filesystems[1441]: Found vda2
May 13 00:22:02.751795 extend-filesystems[1441]: Found vda3
May 13 00:22:02.751795 extend-filesystems[1441]: Found usr
May 13 00:22:02.751795 extend-filesystems[1441]: Found vda4
May 13 00:22:02.751795 extend-filesystems[1441]: Found vda6
May 13 00:22:02.751795 extend-filesystems[1441]: Found vda7
May 13 00:22:02.751795 extend-filesystems[1441]: Found vda9
May 13 00:22:02.751795 extend-filesystems[1441]: Checking size of /dev/vda9
May 13 00:22:02.758210 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 00:22:02.767023 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 00:22:02.768241 jq[1455]: true
May 13 00:22:02.770573 update_engine[1453]: I20250513 00:22:02.770234 1453 main.cc:92] Flatcar Update Engine starting
May 13 00:22:02.771815 update_engine[1453]: I20250513 00:22:02.771660 1453 update_check_scheduler.cc:74] Next update check in 11m57s
May 13 00:22:02.782780 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:22:02.783025 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 00:22:02.783277 extend-filesystems[1441]: Resized partition /dev/vda9
May 13 00:22:02.784324 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:22:02.784530 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 00:22:02.785578 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024)
May 13 00:22:02.787814 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:22:02.788012 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 00:22:02.795222 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 00:22:02.799952 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1390)
May 13 00:22:02.806467 jq[1465]: true
May 13 00:22:02.814641 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 00:22:02.817693 systemd[1]: Started update-engine.service - Update Engine.
May 13 00:22:02.819304 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:22:02.819336 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 00:22:02.829955 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:22:02.829977 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 00:22:02.832553 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 00:22:02.851874 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 00:22:02.877108 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 00:22:02.887524 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 00:22:02.895122 systemd[1]: issuegen.service: Deactivated successfully.
May 13 00:22:02.895420 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 00:22:02.898907 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 00:22:02.954521 tar[1464]: linux-amd64/LICENSE
May 13 00:22:02.958202 tar[1464]: linux-amd64/helm
May 13 00:22:02.956386 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 00:22:02.969410 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 00:22:02.971304 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 00:22:02.971338 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 00:22:02.972227 systemd-logind[1452]: New seat seat0.
May 13 00:22:02.979282 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 00:22:02.980506 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 13 00:22:02.981901 systemd[1]: Reached target getty.target - Login Prompts.
May 13 00:22:02.983052 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 00:22:03.091217 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 00:22:03.309582 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 00:22:03.309582 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 00:22:03.309582 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 00:22:03.314807 extend-filesystems[1441]: Resized filesystem in /dev/vda9
May 13 00:22:03.310588 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:22:03.310810 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
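extend-filesystems.service grows the root filesystem to fill its partition: the kernel lines above show ext4 on /dev/vda9 being resized online from 553472 to 1864699 4 KiB blocks, i.e. from roughly 2.1 GiB to about 7.1 GiB (1864699 x 4096 bytes ≈ 7.6 GB). A sketch of the equivalent manual steps, assuming the cloud-utils growpart tool (the unit's internal implementation may differ):

    growpart /dev/vda 9      # grow partition 9 to the end of the disk (assumed tooling)
    resize2fs /dev/vda9      # ext4 can be grown online while mounted at /

resize2fs detects that the filesystem is mounted and performs an on-line resize, which is exactly the "on-line resizing required" message extend-filesystems logs above.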
May 13 00:22:03.322250 containerd[1466]: time="2025-05-13T00:22:03.321805103Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 13 00:22:03.342961 containerd[1466]: time="2025-05-13T00:22:03.342637339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 00:22:03.345047 containerd[1466]: time="2025-05-13T00:22:03.344996774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 00:22:03.345090 containerd[1466]: time="2025-05-13T00:22:03.345047730Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 00:22:03.345090 containerd[1466]: time="2025-05-13T00:22:03.345068048Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 00:22:03.345328 containerd[1466]: time="2025-05-13T00:22:03.345308960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 13 00:22:03.345351 containerd[1466]: time="2025-05-13T00:22:03.345329348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 13 00:22:03.345418 containerd[1466]: time="2025-05-13T00:22:03.345399660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:22:03.345418 containerd[1466]: time="2025-05-13T00:22:03.345415980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 00:22:03.345834 containerd[1466]: time="2025-05-13T00:22:03.345806152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:22:03.345834 containerd[1466]: time="2025-05-13T00:22:03.345825258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 00:22:03.345875 containerd[1466]: time="2025-05-13T00:22:03.345839014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:22:03.345875 containerd[1466]: time="2025-05-13T00:22:03.345849534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 00:22:03.345971 containerd[1466]: time="2025-05-13T00:22:03.345948129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 00:22:03.346224 containerd[1466]: time="2025-05-13T00:22:03.346204760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 00:22:03.346357 containerd[1466]: time="2025-05-13T00:22:03.346330656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:22:03.346357 containerd[1466]: time="2025-05-13T00:22:03.346348991Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 00:22:03.346459 containerd[1466]: time="2025-05-13T00:22:03.346442596Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 00:22:03.346514 containerd[1466]: time="2025-05-13T00:22:03.346498681Z" level=info msg="metadata content store policy set" policy=shared
May 13 00:22:03.404221 bash[1505]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:22:03.406330 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 00:22:03.408681 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 00:22:03.451110 containerd[1466]: time="2025-05-13T00:22:03.451033308Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 00:22:03.451110 containerd[1466]: time="2025-05-13T00:22:03.451098220Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 00:22:03.451110 containerd[1466]: time="2025-05-13T00:22:03.451113318Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 13 00:22:03.451110 containerd[1466]: time="2025-05-13T00:22:03.451130010Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 13 00:22:03.451344 containerd[1466]: time="2025-05-13T00:22:03.451143475Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 00:22:03.451344 containerd[1466]: time="2025-05-13T00:22:03.451334974Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 00:22:03.451643 containerd[1466]: time="2025-05-13T00:22:03.451609509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 00:22:03.451753 containerd[1466]: time="2025-05-13T00:22:03.451720077Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 13 00:22:03.451753 containerd[1466]: time="2025-05-13T00:22:03.451740966Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 13 00:22:03.451753 containerd[1466]: time="2025-05-13T00:22:03.451752568Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 13 00:22:03.451833 containerd[1466]: time="2025-05-13T00:22:03.451765321Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 00:22:03.451833 containerd[1466]: time="2025-05-13T00:22:03.451777174Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 00:22:03.451833 containerd[1466]: time="2025-05-13T00:22:03.451790539Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 00:22:03.451833 containerd[1466]: time="2025-05-13T00:22:03.451802842Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 00:22:03.451833 containerd[1466]: time="2025-05-13T00:22:03.451815946Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 00:22:03.451833 containerd[1466]: time="2025-05-13T00:22:03.451827718Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451839180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451851283Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451870188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451883553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451895746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451907388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451919721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451931503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451944307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451955899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451967461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451980906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 13 00:22:03.451985 containerd[1466]: time="2025-05-13T00:22:03.451991836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452003318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452016222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452035238Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452059333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452073610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452083849Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452133552Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452151696Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452164150Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452174920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452222739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452256092Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452265780Z" level=info msg="NRI interface is disabled by configuration."
May 13 00:22:03.452350 containerd[1466]: time="2025-05-13T00:22:03.452275368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 13 00:22:03.452706 containerd[1466]: time="2025-05-13T00:22:03.452508315Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 00:22:03.452706 containerd[1466]: time="2025-05-13T00:22:03.452556415Z" level=info msg="Connect containerd service"
May 13 00:22:03.452706 containerd[1466]: time="2025-05-13T00:22:03.452588105Z" level=info msg="using legacy CRI server"
May 13 00:22:03.452706 containerd[1466]: time="2025-05-13T00:22:03.452596601Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 00:22:03.452706 containerd[1466]: time="2025-05-13T00:22:03.452668085Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 00:22:03.453268 containerd[1466]: time="2025-05-13T00:22:03.453227394Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:22:03.453901 containerd[1466]: time="2025-05-13T00:22:03.453502440Z" level=info msg="Start subscribing containerd event"
May 13 00:22:03.453901 containerd[1466]: time="2025-05-13T00:22:03.453600945Z" level=info msg="Start recovering state"
May 13 00:22:03.453901 containerd[1466]: time="2025-05-13T00:22:03.453543577Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 00:22:03.453901 containerd[1466]: time="2025-05-13T00:22:03.453702856Z" level=info msg="Start event monitor"
May 13 00:22:03.453901 containerd[1466]: time="2025-05-13T00:22:03.453731019Z" level=info msg="Start snapshots syncer"
May 13 00:22:03.453901 containerd[1466]: time="2025-05-13T00:22:03.453745866Z" level=info msg="Start cni network conf syncer for default"
May 13 00:22:03.453901 containerd[1466]: time="2025-05-13T00:22:03.453756055Z" level=info msg="Start streaming server"
May 13 00:22:03.453901 containerd[1466]: time="2025-05-13T00:22:03.453712003Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 00:22:03.453965 systemd[1]: Started containerd.service - containerd container runtime.
May 13 00:22:03.455251 containerd[1466]: time="2025-05-13T00:22:03.455152405Z" level=info msg="containerd successfully booted in 0.134880s"
May 13 00:22:03.576132 tar[1464]: linux-amd64/README.md
May 13 00:22:03.593246 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 00:22:04.392368 systemd-networkd[1391]: eth0: Gained IPv6LL
May 13 00:22:04.395491 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 00:22:04.397508 systemd[1]: Reached target network-online.target - Network is Online.
May 13 00:22:04.408473 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 00:22:04.410897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:22:04.413090 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 00:22:04.430666 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 00:22:04.430939 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 00:22:04.432529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 00:22:04.434057 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 00:22:04.691484 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 00:22:04.693867 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:40144.service - OpenSSH per-connection server daemon (10.0.0.1:40144).
May 13 00:22:04.737436 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 40144 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:04.739517 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:04.747624 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 00:22:04.758432 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 00:22:04.762579 systemd-logind[1452]: New session 1 of user core.
May 13 00:22:04.771293 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 00:22:04.782469 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 00:22:04.786612 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 00:22:04.896533 systemd[1552]: Queued start job for default target default.target.
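The "Start cri plugin with config" dump above is containerd echoing its effective CRI configuration. A sketch of the /etc/containerd/config.toml fragment that would produce the key values seen there (SystemdCgroup for runc, the overlayfs snapshotter, pause:3.8 as the sandbox image, and the standard CNI directories); this illustrates the containerd 1.7 config schema and is not the file actually shipped on this host:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The level=error line about CNI is expected at this stage: /etc/cni/net.d is still empty, and the "cni network conf syncer" started just after it will pick up a network config once a CNI plugin installs one.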
May 13 00:22:04.908550 systemd[1552]: Created slice app.slice - User Application Slice.
May 13 00:22:04.908581 systemd[1552]: Reached target paths.target - Paths.
May 13 00:22:04.908599 systemd[1552]: Reached target timers.target - Timers.
May 13 00:22:04.910495 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 00:22:04.922427 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 00:22:04.922581 systemd[1552]: Reached target sockets.target - Sockets.
May 13 00:22:04.922598 systemd[1552]: Reached target basic.target - Basic System.
May 13 00:22:04.922638 systemd[1552]: Reached target default.target - Main User Target.
May 13 00:22:04.922677 systemd[1552]: Startup finished in 129ms.
May 13 00:22:04.923162 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 00:22:04.934344 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 00:22:04.992963 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:40156.service - OpenSSH per-connection server daemon (10.0.0.1:40156).
May 13 00:22:05.032092 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 40156 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:05.034118 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:05.038932 systemd-logind[1452]: New session 2 of user core.
May 13 00:22:05.048348 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 00:22:05.104205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:22:05.106202 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 00:22:05.109074 sshd[1563]: pam_unix(sshd:session): session closed for user core
May 13 00:22:05.109212 systemd[1]: Startup finished in 748ms (kernel) + 7.891s (initrd) + 5.828s (userspace) = 14.469s.
May 13 00:22:05.112112 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:22:05.113395 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:40156.service: Deactivated successfully.
May 13 00:22:05.115820 systemd[1]: session-2.scope: Deactivated successfully.
May 13 00:22:05.116817 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit.
May 13 00:22:05.119602 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:40168.service - OpenSSH per-connection server daemon (10.0.0.1:40168).
May 13 00:22:05.123281 systemd-logind[1452]: Removed session 2.
May 13 00:22:05.160367 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 40168 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:05.162078 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:05.166399 systemd-logind[1452]: New session 3 of user core.
May 13 00:22:05.180295 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 00:22:05.234348 sshd[1578]: pam_unix(sshd:session): session closed for user core
May 13 00:22:05.237006 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:40168.service: Deactivated successfully.
May 13 00:22:05.238892 systemd[1]: session-3.scope: Deactivated successfully.
May 13 00:22:05.240279 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit.
May 13 00:22:05.241051 systemd-logind[1452]: Removed session 3.
May 13 00:22:05.516603 kubelet[1572]: E0513 00:22:05.516556 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:22:05.520687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:22:05.520892 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:22:15.246307 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:54230.service - OpenSSH per-connection server daemon (10.0.0.1:54230).
May 13 00:22:15.282882 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 54230 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:15.284733 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:15.288965 systemd-logind[1452]: New session 4 of user core.
May 13 00:22:15.298357 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 00:22:15.352130 sshd[1594]: pam_unix(sshd:session): session closed for user core
May 13 00:22:15.365875 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:54230.service: Deactivated successfully.
May 13 00:22:15.367432 systemd[1]: session-4.scope: Deactivated successfully.
May 13 00:22:15.368799 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit.
May 13 00:22:15.370025 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:54238.service - OpenSSH per-connection server daemon (10.0.0.1:54238).
May 13 00:22:15.370748 systemd-logind[1452]: Removed session 4.
May 13 00:22:15.417148 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 54238 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:15.419030 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:15.423254 systemd-logind[1452]: New session 5 of user core.
May 13 00:22:15.443471 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 00:22:15.494104 sshd[1601]: pam_unix(sshd:session): session closed for user core
May 13 00:22:15.515671 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:54238.service: Deactivated successfully.
May 13 00:22:15.518119 systemd[1]: session-5.scope: Deactivated successfully.
May 13 00:22:15.520047 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
May 13 00:22:15.534711 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:54240.service - OpenSSH per-connection server daemon (10.0.0.1:54240).
May 13 00:22:15.535990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 00:22:15.537812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:22:15.538142 systemd-logind[1452]: Removed session 5.
May 13 00:22:15.563410 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 54240 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:15.564890 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:15.572142 systemd-logind[1452]: New session 6 of user core.
May 13 00:22:15.573487 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 00:22:15.629573 sshd[1608]: pam_unix(sshd:session): session closed for user core
May 13 00:22:15.639864 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:54240.service: Deactivated successfully.
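The kubelet failure at the top of this span (and repeated on every restart that follows) is the normal state of a node that has not yet been initialized: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, and systemd simply keeps restarting the unit until the file appears. For reference, a minimal hand-written KubeletConfiguration of the kind kubeadm generates (the field values here are illustrative assumptions, not this node's eventual config):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                  # matches SystemdCgroup=true in the containerd runc options above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests

Until that file exists, run.go:72 exits with status 1 and systemd schedules the next restart, which is why the restart counter keeps climbing in the entries below.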
May 13 00:22:15.641442 systemd[1]: session-6.scope: Deactivated successfully.
May 13 00:22:15.642949 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
May 13 00:22:15.644134 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:54250.service - OpenSSH per-connection server daemon (10.0.0.1:54250).
May 13 00:22:15.647670 systemd-logind[1452]: Removed session 6.
May 13 00:22:15.691953 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 54250 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:15.692681 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:15.696311 systemd-logind[1452]: New session 7 of user core.
May 13 00:22:15.704291 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 00:22:15.719466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:22:15.723684 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:22:15.765626 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 00:22:15.766066 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:22:15.772028 kubelet[1626]: E0513 00:22:15.771849 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:22:15.779392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:22:15.779594 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:22:15.784974 sudo[1633]: pam_unix(sudo:session): session closed for user root
May 13 00:22:15.787333 sshd[1618]: pam_unix(sshd:session): session closed for user core
May 13 00:22:15.799020 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:54250.service: Deactivated successfully.
May 13 00:22:15.800596 systemd[1]: session-7.scope: Deactivated successfully.
May 13 00:22:15.802074 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
May 13 00:22:15.813400 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:54264.service - OpenSSH per-connection server daemon (10.0.0.1:54264).
May 13 00:22:15.814438 systemd-logind[1452]: Removed session 7.
May 13 00:22:15.842372 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 54264 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:15.844008 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:15.847901 systemd-logind[1452]: New session 8 of user core.
May 13 00:22:15.857288 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 00:22:15.910928 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 00:22:15.911348 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:22:15.914779 sudo[1644]: pam_unix(sudo:session): session closed for user root
May 13 00:22:15.920735 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 13 00:22:15.921078 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:22:15.939462 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 13 00:22:15.941281 auditctl[1647]: No rules
May 13 00:22:15.942497 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 00:22:15.942761 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 13 00:22:15.944684 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 00:22:15.976769 augenrules[1665]: No rules
May 13 00:22:15.978964 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 00:22:15.980219 sudo[1643]: pam_unix(sudo:session): session closed for user root
May 13 00:22:15.982158 sshd[1640]: pam_unix(sshd:session): session closed for user core
May 13 00:22:15.993100 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:54264.service: Deactivated successfully.
May 13 00:22:15.994672 systemd[1]: session-8.scope: Deactivated successfully.
May 13 00:22:15.996216 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
May 13 00:22:15.997536 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:54274.service - OpenSSH per-connection server daemon (10.0.0.1:54274).
May 13 00:22:15.998357 systemd-logind[1452]: Removed session 8.
May 13 00:22:16.031112 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 54274 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:22:16.032920 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:22:16.036725 systemd-logind[1452]: New session 9 of user core.
May 13 00:22:16.046374 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 00:22:16.100712 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 00:22:16.101129 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:22:16.397539 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 00:22:16.397789 (dockerd)[1694]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 00:22:16.666164 dockerd[1694]: time="2025-05-13T00:22:16.666000470Z" level=info msg="Starting up"
May 13 00:22:17.561860 dockerd[1694]: time="2025-05-13T00:22:17.561802869Z" level=info msg="Loading containers: start."
May 13 00:22:17.736205 kernel: Initializing XFRM netlink socket
May 13 00:22:17.810157 systemd-networkd[1391]: docker0: Link UP
May 13 00:22:17.829757 dockerd[1694]: time="2025-05-13T00:22:17.829655434Z" level=info msg="Loading containers: done."
May 13 00:22:17.844057 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1990301426-merged.mount: Deactivated successfully.
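The sudo/audit sequence at the start of this span is an installer clearing the default audit rule set: the two rules.d files are removed, audit-rules.service is restarted, auditctl reports "No rules" while the old rules are flushed, and augenrules rebuilds an (empty) rule set. A sketch of the same operation done by hand, using standard auditd tooling:

    auditctl -D        # delete every currently loaded audit rule
    augenrules --load  # concatenate /etc/audit/rules.d/*.rules and load the result

With the two default .rules files already deleted, augenrules finds nothing to load, hence the "No rules" it logs above.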
May 13 00:22:17.846850 dockerd[1694]: time="2025-05-13T00:22:17.846810421Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 00:22:17.846934 dockerd[1694]: time="2025-05-13T00:22:17.846913074Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 13 00:22:17.847035 dockerd[1694]: time="2025-05-13T00:22:17.847018021Z" level=info msg="Daemon has completed initialization"
May 13 00:22:17.881719 dockerd[1694]: time="2025-05-13T00:22:17.881637788Z" level=info msg="API listen on /run/docker.sock"
May 13 00:22:17.881956 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 00:22:18.633396 containerd[1466]: time="2025-05-13T00:22:18.633346518Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 13 00:22:19.339506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206354771.mount: Deactivated successfully.
May 13 00:22:21.136351 containerd[1466]: time="2025-05-13T00:22:21.136296858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:21.137326 containerd[1466]: time="2025-05-13T00:22:21.137280633Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879"
May 13 00:22:21.138805 containerd[1466]: time="2025-05-13T00:22:21.138744389Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:21.143567 containerd[1466]: time="2025-05-13T00:22:21.143517732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:21.144947 containerd[1466]: time="2025-05-13T00:22:21.144889635Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.511503723s"
May 13 00:22:21.145005 containerd[1466]: time="2025-05-13T00:22:21.144956010Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 13 00:22:21.145619 containerd[1466]: time="2025-05-13T00:22:21.145595970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 13 00:22:23.067452 containerd[1466]: time="2025-05-13T00:22:23.067373736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:23.089247 containerd[1466]: time="2025-05-13T00:22:23.089137248Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589"
May 13 00:22:23.102999 containerd[1466]: time="2025-05-13T00:22:23.102948334Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:23.122305 containerd[1466]: time="2025-05-13T00:22:23.122243256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:23.123378 containerd[1466]: time="2025-05-13T00:22:23.123338871Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.977708395s"
May 13 00:22:23.123480 containerd[1466]: time="2025-05-13T00:22:23.123378725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 13 00:22:23.124290 containerd[1466]: time="2025-05-13T00:22:23.124265248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 13 00:22:25.861158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 00:22:25.872483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:22:26.032031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:22:26.037730 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:22:26.096953 kubelet[1912]: E0513 00:22:26.096891 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:22:26.101796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:22:26.102160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:22:27.213809 containerd[1466]: time="2025-05-13T00:22:27.213730817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:27.234667 containerd[1466]: time="2025-05-13T00:22:27.234559596Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938"
May 13 00:22:27.258143 containerd[1466]: time="2025-05-13T00:22:27.258073441Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:27.280709 containerd[1466]: time="2025-05-13T00:22:27.280643315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:27.281534 containerd[1466]: time="2025-05-13T00:22:27.281493019Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 4.157197985s"
May 13 00:22:27.281602 containerd[1466]: time="2025-05-13T00:22:27.281531351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 13 00:22:27.282143 containerd[1466]: time="2025-05-13T00:22:27.282102743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 13 00:22:29.182555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790100766.mount: Deactivated successfully.
May 13 00:22:30.349443 containerd[1466]: time="2025-05-13T00:22:30.349348630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:30.349987 containerd[1466]: time="2025-05-13T00:22:30.349950579Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
May 13 00:22:30.351112 containerd[1466]: time="2025-05-13T00:22:30.351077202Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:30.353101 containerd[1466]: time="2025-05-13T00:22:30.353046325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:30.353681 containerd[1466]: time="2025-05-13T00:22:30.353642183Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 3.071504936s"
May 13 00:22:30.353681 containerd[1466]: time="2025-05-13T00:22:30.353674844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 13 00:22:30.354227 containerd[1466]: time="2025-05-13T00:22:30.354168330Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 13 00:22:31.092931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281567139.mount: Deactivated successfully.
May 13 00:22:32.132272 containerd[1466]: time="2025-05-13T00:22:32.132205289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:32.133106 containerd[1466]: time="2025-05-13T00:22:32.133032851Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 13 00:22:32.134310 containerd[1466]: time="2025-05-13T00:22:32.134274059Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:32.137589 containerd[1466]: time="2025-05-13T00:22:32.137551667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:32.138583 containerd[1466]: time="2025-05-13T00:22:32.138542586Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.784329011s"
May 13 00:22:32.138583 containerd[1466]: time="2025-05-13T00:22:32.138572622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 13 00:22:32.139027 containerd[1466]: time="2025-05-13T00:22:32.138986068Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 00:22:32.736480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357587223.mount: Deactivated successfully.
May 13 00:22:32.741803 containerd[1466]: time="2025-05-13T00:22:32.741753066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:32.742529 containerd[1466]: time="2025-05-13T00:22:32.742478627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 13 00:22:32.744019 containerd[1466]: time="2025-05-13T00:22:32.743992417Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:32.746215 containerd[1466]: time="2025-05-13T00:22:32.746171043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:32.746949 containerd[1466]: time="2025-05-13T00:22:32.746906062Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.892202ms"
May 13 00:22:32.746949 containerd[1466]: time="2025-05-13T00:22:32.746941619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 13 00:22:32.747456 containerd[1466]: time="2025-05-13T00:22:32.747410037Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 13 00:22:33.352417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694250803.mount: Deactivated successfully.
May 13 00:22:35.520897 containerd[1466]: time="2025-05-13T00:22:35.520837734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:35.521662 containerd[1466]: time="2025-05-13T00:22:35.521603260Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
May 13 00:22:35.522932 containerd[1466]: time="2025-05-13T00:22:35.522899109Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:35.526222 containerd[1466]: time="2025-05-13T00:22:35.526172232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:22:35.528232 containerd[1466]: time="2025-05-13T00:22:35.528195395Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.780732857s"
May 13 00:22:35.530113 containerd[1466]: time="2025-05-13T00:22:35.528287954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 13 00:22:36.111025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
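The ImageCreate/PullImage sequence above is containerd's CRI plugin pulling the Kubernetes control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd), presumably kicked off by the install script running in session 9. The same pulls can be driven or inspected by hand through the CRI socket with crictl; a sketch, using the containerd.sock endpoint logged earlier:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/etcd:3.5.16-0
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images

Each pull ends with a "returns image reference" line carrying the image's sha256 ID, which is what the kubelet will later use when it starts the static control-plane pods.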
May 13 00:22:36.120442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:22:36.282759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:22:36.286890 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:22:36.333942 kubelet[2071]: E0513 00:22:36.333822 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:22:36.338298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:22:36.338561 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:22:37.560920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:22:37.570531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:22:37.593811 systemd[1]: Reloading requested from client PID 2086 ('systemctl') (unit session-9.scope)...
May 13 00:22:37.593828 systemd[1]: Reloading...
May 13 00:22:37.671224 zram_generator::config[2125]: No configuration found.
May 13 00:22:38.675614 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:22:38.754556 systemd[1]: Reloading finished in 1160 ms.
May 13 00:22:38.805832 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 00:22:38.805926 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 00:22:38.806236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:22:38.809049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:22:38.974116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:22:38.978544 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 00:22:39.027468 kubelet[2174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:22:39.027468 kubelet[2174]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 00:22:39.027468 kubelet[2174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:22:39.027860 kubelet[2174]: I0513 00:22:39.027587 2174 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:22:39.461625 kubelet[2174]: I0513 00:22:39.461579 2174 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:22:39.462592 kubelet[2174]: I0513 00:22:39.461777 2174 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:22:39.462592 kubelet[2174]: I0513 00:22:39.462337 2174 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:22:39.485131 kubelet[2174]: E0513 00:22:39.485067 2174 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:39.487685 kubelet[2174]: I0513 00:22:39.487652 2174 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:22:39.493556 kubelet[2174]: E0513 00:22:39.493526 2174 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:22:39.493556 kubelet[2174]: I0513 00:22:39.493556 2174 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:22:39.498828 kubelet[2174]: I0513 00:22:39.498797 2174 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:22:39.499989 kubelet[2174]: I0513 00:22:39.499944 2174 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:22:39.500149 kubelet[2174]: I0513 00:22:39.499978 2174 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:22:39.500149 kubelet[2174]: I0513 00:22:39.500148 2174 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:22:39.500355 kubelet[2174]: I0513 00:22:39.500160 2174 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:22:39.500355 kubelet[2174]: I0513 00:22:39.500323 2174 state_mem.go:36] "Initialized new in-memory state store" May 13 00:22:39.502808 kubelet[2174]: I0513 00:22:39.502776 2174 kubelet.go:446] "Attempting to sync node with API server" May 13 00:22:39.502808 kubelet[2174]: I0513 00:22:39.502801 2174 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:22:39.502884 kubelet[2174]: I0513 00:22:39.502820 2174 kubelet.go:352] "Adding apiserver pod source" May 13 00:22:39.502884 kubelet[2174]: I0513 00:22:39.502832 2174 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:22:39.508074 kubelet[2174]: I0513 00:22:39.507755 2174 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:22:39.509158 kubelet[2174]: I0513 00:22:39.508314 2174 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:22:39.509158 kubelet[2174]: W0513 00:22:39.508422 2174 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 00:22:39.509158 kubelet[2174]: W0513 00:22:39.508897 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 13 00:22:39.509158 kubelet[2174]: E0513 00:22:39.508947 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:39.511620 kubelet[2174]: I0513 00:22:39.510932 2174 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:22:39.511620 kubelet[2174]: I0513 00:22:39.510974 2174 server.go:1287] "Started kubelet" May 13 00:22:39.514146 kubelet[2174]: I0513 00:22:39.514103 2174 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:22:39.514550 kubelet[2174]: I0513 00:22:39.514503 2174 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:22:39.515477 kubelet[2174]: I0513 00:22:39.515447 2174 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:22:39.516959 kubelet[2174]: I0513 00:22:39.516841 2174 server.go:490] "Adding debug handlers to kubelet server" May 13 00:22:39.517033 kubelet[2174]: W0513 00:22:39.517004 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 13 00:22:39.517076 kubelet[2174]: E0513 00:22:39.517039 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:39.518302 kubelet[2174]: I0513 00:22:39.517250 2174 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:22:39.518302 kubelet[2174]: I0513 00:22:39.518057 2174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:22:39.518412 kubelet[2174]: I0513 00:22:39.518357 2174 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:22:39.518443 kubelet[2174]: I0513 00:22:39.518424 2174 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:22:39.518476 kubelet[2174]: I0513 00:22:39.518460 2174 reconciler.go:26] "Reconciler: start to sync state" May 13 00:22:39.519614 kubelet[2174]: W0513 00:22:39.518710 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 13 00:22:39.519614 kubelet[2174]: E0513 00:22:39.518744 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:39.519785 kubelet[2174]: E0513 00:22:39.518250 2174 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee56363dd66c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:22:39.510951532 +0000 UTC m=+0.528592322,LastTimestamp:2025-05-13 00:22:39.510951532 +0000 UTC m=+0.528592322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:22:39.520267 kubelet[2174]: I0513 00:22:39.520250 2174 factory.go:221] Registration of the systemd container factory successfully May 13 00:22:39.520416 kubelet[2174]: I0513 00:22:39.520395 2174 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:22:39.520930 kubelet[2174]: E0513 00:22:39.520880 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:39.521062 kubelet[2174]: E0513 00:22:39.520989 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" May 13 00:22:39.521125 kubelet[2174]: E0513 00:22:39.521102 2174 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:22:39.521842 kubelet[2174]: I0513 00:22:39.521816 2174 factory.go:221] Registration of the containerd container factory successfully May 13 00:22:39.536142 kubelet[2174]: I0513 00:22:39.535990 2174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:22:39.537767 kubelet[2174]: I0513 00:22:39.537743 2174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:22:39.537767 kubelet[2174]: I0513 00:22:39.537766 2174 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:22:39.537862 kubelet[2174]: I0513 00:22:39.537782 2174 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 00:22:39.537862 kubelet[2174]: I0513 00:22:39.537788 2174 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:22:39.537862 kubelet[2174]: E0513 00:22:39.537831 2174 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:22:39.538770 kubelet[2174]: W0513 00:22:39.538418 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 13 00:22:39.538770 kubelet[2174]: E0513 00:22:39.538462 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:39.543961 kubelet[2174]: I0513 00:22:39.543927 2174 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:22:39.543961 kubelet[2174]: I0513 00:22:39.543944 2174 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:22:39.543961 kubelet[2174]: I0513 00:22:39.543960 2174 state_mem.go:36] "Initialized new in-memory state store" May 13 00:22:39.621502 kubelet[2174]: E0513 00:22:39.621462 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:39.638757 kubelet[2174]: E0513 00:22:39.638700 2174 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:22:39.721623 kubelet[2174]: E0513 00:22:39.721534 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:39.721681 kubelet[2174]: E0513 00:22:39.721607 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" May 13 00:22:39.822062 kubelet[2174]: E0513 00:22:39.822033 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:39.839244 kubelet[2174]: E0513 00:22:39.839213 2174 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:22:39.922812 kubelet[2174]: E0513 00:22:39.922742 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:39.964825 kubelet[2174]: I0513 00:22:39.964778 2174 policy_none.go:49] "None policy: Start" May 13 00:22:39.964825 kubelet[2174]: I0513 00:22:39.964818 2174 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:22:39.964825 kubelet[2174]: I0513 00:22:39.964833 2174 state_mem.go:35] "Initializing new in-memory state store" May 13 00:22:40.023000 kubelet[2174]: E0513 00:22:40.022841 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:40.122780 kubelet[2174]: E0513 00:22:40.122731 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial 
tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" May 13 00:22:40.123718 kubelet[2174]: E0513 00:22:40.123684 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:40.224484 kubelet[2174]: E0513 00:22:40.224405 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:40.239699 kubelet[2174]: E0513 00:22:40.239630 2174 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:22:40.325473 kubelet[2174]: E0513 00:22:40.325316 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:40.393080 kubelet[2174]: W0513 00:22:40.393016 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 13 00:22:40.393080 kubelet[2174]: E0513 00:22:40.393064 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:40.429429 kubelet[2174]: E0513 00:22:40.429370 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:40.529852 kubelet[2174]: E0513 00:22:40.529799 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:40.630575 kubelet[2174]: E0513 00:22:40.630526 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:40.710370 kubelet[2174]: W0513 00:22:40.710323 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 13 00:22:40.710475 kubelet[2174]: E0513 00:22:40.710388 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:40.731286 kubelet[2174]: E0513 00:22:40.731234 2174 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:40.767588 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:22:40.787334 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:22:40.791212 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
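The kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice units created here are the systemd cgroup roots for the three pod QoS classes: Guaranteed pods are placed directly under kubepods.slice, while Burstable and BestEffort pods land in the corresponding sub-slices. As a sketch, a pod that declares requests without matching limits is classified Burstable and would run under kubepods-burstable.slice; the pod name and resource values below are hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-example                      # hypothetical pod
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10    # image already pulled earlier in this log
        resources:
          requests:                          # requests with no equal limits => Burstable QoS
            cpu: 100m
            memory: 64Mi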
May 13 00:22:40.801318 kubelet[2174]: I0513 00:22:40.801273 2174 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:22:40.801552 kubelet[2174]: I0513 00:22:40.801519 2174 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:22:40.801597 kubelet[2174]: I0513 00:22:40.801537 2174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:22:40.801842 kubelet[2174]: I0513 00:22:40.801753 2174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:22:40.802940 kubelet[2174]: E0513 00:22:40.802907 2174 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 00:22:40.803072 kubelet[2174]: E0513 00:22:40.802965 2174 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:22:40.903472 kubelet[2174]: I0513 00:22:40.903314 2174 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:22:40.903978 kubelet[2174]: E0513 00:22:40.903924 2174 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 13 00:22:40.923984 kubelet[2174]: E0513 00:22:40.923937 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s" May 13 00:22:40.967639 kubelet[2174]: W0513 00:22:40.967596 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 13 00:22:40.967639 kubelet[2174]: E0513 00:22:40.967646 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:41.047985 systemd[1]: Created slice kubepods-burstable-pod15ec462c333880080d46a0218c9049b9.slice - libcontainer container kubepods-burstable-pod15ec462c333880080d46a0218c9049b9.slice. May 13 00:22:41.073041 kubelet[2174]: E0513 00:22:41.073011 2174 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:22:41.076947 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 00:22:41.078698 kubelet[2174]: E0513 00:22:41.078673 2174 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:22:41.096518 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
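The per-pod slices created above (pod UIDs 15ec462c…, 5386fe11…, 2980a8ab…) belong to the three static control-plane pods the kubelet found under /etc/kubernetes/manifests, the path registered by the "Adding static pod path" entry earlier; the "No need to create a mirror pod" errors simply mean their API-server copies cannot be created yet. An abridged sketch of what one such manifest looks like; the image tag is assumed from the kubelet version, and the real kubeadm-generated file carries many more flags, probes, and volume mounts:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (abridged sketch)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.0   # assumed to match kubelet v1.32.0
        command:
        - kube-apiserver
        - --advertise-address=10.0.0.45                 # the address dialed throughout this log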
May 13 00:22:41.098301 kubelet[2174]: E0513 00:22:41.098269 2174 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:22:41.098989 kubelet[2174]: W0513 00:22:41.098948 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused May 13 00:22:41.099033 kubelet[2174]: E0513 00:22:41.099002 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:41.105422 kubelet[2174]: I0513 00:22:41.105379 2174 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:22:41.105783 kubelet[2174]: E0513 00:22:41.105747 2174 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 13 00:22:41.132295 kubelet[2174]: I0513 00:22:41.132228 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:41.132295 kubelet[2174]: I0513 00:22:41.132294 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:41.132759 kubelet[2174]: I0513 00:22:41.132320 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:41.132759 kubelet[2174]: I0513 00:22:41.132346 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:41.132759 kubelet[2174]: I0513 00:22:41.132370 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15ec462c333880080d46a0218c9049b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"15ec462c333880080d46a0218c9049b9\") " pod="kube-system/kube-apiserver-localhost" May 13 00:22:41.132759 kubelet[2174]: I0513 00:22:41.132390 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/15ec462c333880080d46a0218c9049b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"15ec462c333880080d46a0218c9049b9\") " pod="kube-system/kube-apiserver-localhost" May 13 00:22:41.132759 kubelet[2174]: I0513 00:22:41.132422 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:22:41.132869 kubelet[2174]: I0513 00:22:41.132472 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15ec462c333880080d46a0218c9049b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"15ec462c333880080d46a0218c9049b9\") " pod="kube-system/kube-apiserver-localhost" May 13 00:22:41.132869 kubelet[2174]: I0513 00:22:41.132510 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:41.373896 kubelet[2174]: E0513 00:22:41.373847 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:41.374871 containerd[1466]: time="2025-05-13T00:22:41.374813142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:15ec462c333880080d46a0218c9049b9,Namespace:kube-system,Attempt:0,}" May 13 00:22:41.379980 kubelet[2174]: E0513 00:22:41.379953 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:41.380390 containerd[1466]: time="2025-05-13T00:22:41.380356300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 00:22:41.399772 kubelet[2174]: E0513 00:22:41.399723 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:41.400385 containerd[1466]: time="2025-05-13T00:22:41.400346265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 00:22:41.507745 kubelet[2174]: I0513 00:22:41.507708 2174 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:22:41.508034 kubelet[2174]: E0513 00:22:41.508012 2174 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 13 00:22:41.595069 kubelet[2174]: E0513 00:22:41.595009 2174 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" May 13 00:22:42.190925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623114814.mount: Deactivated successfully. May 13 00:22:42.198414 containerd[1466]: time="2025-05-13T00:22:42.198340663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:42.199341 containerd[1466]: time="2025-05-13T00:22:42.199301548Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:42.200271 containerd[1466]: time="2025-05-13T00:22:42.200204472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:22:42.201149 containerd[1466]: time="2025-05-13T00:22:42.201117916Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:42.201874 containerd[1466]: time="2025-05-13T00:22:42.201839284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 13 00:22:42.202920 containerd[1466]: time="2025-05-13T00:22:42.202827591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:22:42.203843 containerd[1466]: time="2025-05-13T00:22:42.203808804Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:42.207723 containerd[1466]: time="2025-05-13T00:22:42.207697069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:42.208641 containerd[1466]: time="2025-05-13T00:22:42.208605354Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 828.175493ms" May 13 00:22:42.210118 containerd[1466]: time="2025-05-13T00:22:42.210090780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 835.17911ms" May 13 00:22:42.211637 containerd[1466]: time="2025-05-13T00:22:42.211503107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 811.067282ms" May 13 00:22:42.310034 kubelet[2174]: I0513 
00:22:42.309994 2174 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:22:42.310483 kubelet[2174]: E0513 00:22:42.310385 2174 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" May 13 00:22:42.358486 containerd[1466]: time="2025-05-13T00:22:42.358108807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:42.358486 containerd[1466]: time="2025-05-13T00:22:42.358163402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:42.358486 containerd[1466]: time="2025-05-13T00:22:42.358201123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:42.358486 containerd[1466]: time="2025-05-13T00:22:42.358274193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:42.358762 containerd[1466]: time="2025-05-13T00:22:42.358698563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:42.358801 containerd[1466]: time="2025-05-13T00:22:42.358774469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:42.358986 containerd[1466]: time="2025-05-13T00:22:42.358810828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:42.358986 containerd[1466]: time="2025-05-13T00:22:42.358882726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:42.359170 containerd[1466]: time="2025-05-13T00:22:42.358956797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:42.359170 containerd[1466]: time="2025-05-13T00:22:42.359013475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:42.359170 containerd[1466]: time="2025-05-13T00:22:42.359033303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:42.359170 containerd[1466]: time="2025-05-13T00:22:42.359115389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:42.389339 systemd[1]: Started cri-containerd-edca83560ddf05567f7e8e5a174b595f71bde8be6b2a941dad54ce9436590d41.scope - libcontainer container edca83560ddf05567f7e8e5a174b595f71bde8be6b2a941dad54ce9436590d41. May 13 00:22:42.393469 systemd[1]: Started cri-containerd-68d4509de6e92b5d534e174302ac3e844876e990ced02ba57da932f0d2c5151b.scope - libcontainer container 68d4509de6e92b5d534e174302ac3e844876e990ced02ba57da932f0d2c5151b. May 13 00:22:42.394970 systemd[1]: Started cri-containerd-835fce367b35d79f9e608465c69c44eee78de683779121b02f432cb51ecdd384.scope - libcontainer container 835fce367b35d79f9e608465c69c44eee78de683779121b02f432cb51ecdd384. 
May 13 00:22:42.431066 containerd[1466]: time="2025-05-13T00:22:42.431017395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:15ec462c333880080d46a0218c9049b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"edca83560ddf05567f7e8e5a174b595f71bde8be6b2a941dad54ce9436590d41\"" May 13 00:22:42.432422 kubelet[2174]: E0513 00:22:42.432379 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:42.434604 containerd[1466]: time="2025-05-13T00:22:42.434567775Z" level=info msg="CreateContainer within sandbox \"edca83560ddf05567f7e8e5a174b595f71bde8be6b2a941dad54ce9436590d41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:22:42.435578 containerd[1466]: time="2025-05-13T00:22:42.435494163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"835fce367b35d79f9e608465c69c44eee78de683779121b02f432cb51ecdd384\"" May 13 00:22:42.436493 kubelet[2174]: E0513 00:22:42.436464 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:42.439377 containerd[1466]: time="2025-05-13T00:22:42.439313467Z" level=info msg="CreateContainer within sandbox \"835fce367b35d79f9e608465c69c44eee78de683779121b02f432cb51ecdd384\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:22:42.443639 containerd[1466]: time="2025-05-13T00:22:42.443541380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"68d4509de6e92b5d534e174302ac3e844876e990ced02ba57da932f0d2c5151b\"" May 13 00:22:42.444207 kubelet[2174]: E0513 00:22:42.444154 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:42.446880 containerd[1466]: time="2025-05-13T00:22:42.446828216Z" level=info msg="CreateContainer within sandbox \"68d4509de6e92b5d534e174302ac3e844876e990ced02ba57da932f0d2c5151b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:22:42.452548 containerd[1466]: time="2025-05-13T00:22:42.452499335Z" level=info msg="CreateContainer within sandbox \"edca83560ddf05567f7e8e5a174b595f71bde8be6b2a941dad54ce9436590d41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d967060b41fecd10897e336acfe062b3a8753d9929a8e220c66f33503301e0a6\"" May 13 00:22:42.452945 containerd[1466]: time="2025-05-13T00:22:42.452915059Z" level=info msg="StartContainer for \"d967060b41fecd10897e336acfe062b3a8753d9929a8e220c66f33503301e0a6\"" May 13 00:22:42.465621 containerd[1466]: time="2025-05-13T00:22:42.465548391Z" level=info msg="CreateContainer within sandbox \"835fce367b35d79f9e608465c69c44eee78de683779121b02f432cb51ecdd384\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6bbc7341e44e636c46f738120d3d8fe88b07130c2a01f493d560509639410ee1\"" May 13 00:22:42.466113 containerd[1466]: time="2025-05-13T00:22:42.466078323Z" level=info msg="StartContainer for \"6bbc7341e44e636c46f738120d3d8fe88b07130c2a01f493d560509639410ee1\"" May 13 
00:22:42.469284 containerd[1466]: time="2025-05-13T00:22:42.469248126Z" level=info msg="CreateContainer within sandbox \"68d4509de6e92b5d534e174302ac3e844876e990ced02ba57da932f0d2c5151b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0cddcf8ad805a48f2f3fc09ab221468cb0f7b603ca2a3c9f0b98343a4182387e\"" May 13 00:22:42.469938 containerd[1466]: time="2025-05-13T00:22:42.469799729Z" level=info msg="StartContainer for \"0cddcf8ad805a48f2f3fc09ab221468cb0f7b603ca2a3c9f0b98343a4182387e\"" May 13 00:22:42.479359 systemd[1]: Started cri-containerd-d967060b41fecd10897e336acfe062b3a8753d9929a8e220c66f33503301e0a6.scope - libcontainer container d967060b41fecd10897e336acfe062b3a8753d9929a8e220c66f33503301e0a6. May 13 00:22:42.503436 systemd[1]: Started cri-containerd-6bbc7341e44e636c46f738120d3d8fe88b07130c2a01f493d560509639410ee1.scope - libcontainer container 6bbc7341e44e636c46f738120d3d8fe88b07130c2a01f493d560509639410ee1. May 13 00:22:42.509114 systemd[1]: Started cri-containerd-0cddcf8ad805a48f2f3fc09ab221468cb0f7b603ca2a3c9f0b98343a4182387e.scope - libcontainer container 0cddcf8ad805a48f2f3fc09ab221468cb0f7b603ca2a3c9f0b98343a4182387e. May 13 00:22:42.525442 kubelet[2174]: E0513 00:22:42.525167 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="3.2s" May 13 00:22:42.538303 containerd[1466]: time="2025-05-13T00:22:42.538246767Z" level=info msg="StartContainer for \"d967060b41fecd10897e336acfe062b3a8753d9929a8e220c66f33503301e0a6\" returns successfully" May 13 00:22:42.544549 containerd[1466]: time="2025-05-13T00:22:42.544522541Z" level=info msg="StartContainer for \"6bbc7341e44e636c46f738120d3d8fe88b07130c2a01f493d560509639410ee1\" returns successfully" May 13 00:22:42.548234 kubelet[2174]: E0513 00:22:42.548204 2174 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:22:42.548320 kubelet[2174]: E0513 00:22:42.548313 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:42.550373 kubelet[2174]: E0513 00:22:42.550230 2174 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:22:42.550373 kubelet[2174]: E0513 00:22:42.550314 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:42.557638 containerd[1466]: time="2025-05-13T00:22:42.557518085Z" level=info msg="StartContainer for \"0cddcf8ad805a48f2f3fc09ab221468cb0f7b603ca2a3c9f0b98343a4182387e\" returns successfully" May 13 00:22:43.555735 kubelet[2174]: E0513 00:22:43.555694 2174 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:22:43.556129 kubelet[2174]: E0513 00:22:43.555769 2174 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:22:43.556129 kubelet[2174]: E0513 00:22:43.555832 2174 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:43.556129 kubelet[2174]: E0513 00:22:43.555894 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:43.857327 kubelet[2174]: E0513 00:22:43.857209 2174 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 00:22:43.912056 kubelet[2174]: I0513 00:22:43.912023 2174 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:22:43.918749 kubelet[2174]: I0513 00:22:43.918716 2174 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:22:43.918749 kubelet[2174]: E0513 00:22:43.918740 2174 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 00:22:43.921408 kubelet[2174]: I0513 00:22:43.921394 2174 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:22:43.925557 kubelet[2174]: E0513 00:22:43.925536 2174 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:22:43.925557 kubelet[2174]: I0513 00:22:43.925557 2174 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:22:43.926677 kubelet[2174]: E0513 00:22:43.926660 2174 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 00:22:43.926677 kubelet[2174]: I0513 00:22:43.926676 2174 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:22:43.928294 kubelet[2174]: E0513 00:22:43.928262 2174 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 00:22:44.505303 kubelet[2174]: I0513 00:22:44.505261 2174 apiserver.go:52] "Watching apiserver" May 13 00:22:44.522293 kubelet[2174]: I0513 00:22:44.522252 2174 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:22:44.555854 kubelet[2174]: I0513 00:22:44.555825 2174 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:22:44.559391 kubelet[2174]: E0513 00:22:44.559346 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:45.416712 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-9.scope)... May 13 00:22:45.416730 systemd[1]: Reloading... May 13 00:22:45.501231 zram_generator::config[2494]: No configuration found. 
May 13 00:22:45.557686 kubelet[2174]: E0513 00:22:45.557656 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:45.613098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:22:45.707079 systemd[1]: Reloading finished in 289 ms. May 13 00:22:45.754863 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:22:45.777812 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:22:45.778119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:22:45.778199 systemd[1]: kubelet.service: Consumed 1.054s CPU time, 126.5M memory peak, 0B memory swap peak. May 13 00:22:45.789662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:22:45.945132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:22:45.950442 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:22:45.991858 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:22:45.991858 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:22:45.991858 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:22:45.992280 kubelet[2539]: I0513 00:22:45.991840 2539 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:22:45.998899 kubelet[2539]: I0513 00:22:45.998861 2539 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:22:45.998899 kubelet[2539]: I0513 00:22:45.998891 2539 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:22:45.999566 kubelet[2539]: I0513 00:22:45.999542 2539 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:22:46.000829 kubelet[2539]: I0513 00:22:46.000811 2539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:22:46.002896 kubelet[2539]: I0513 00:22:46.002871 2539 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:22:46.005712 kubelet[2539]: E0513 00:22:46.005654 2539 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:22:46.005712 kubelet[2539]: I0513 00:22:46.005712 2539 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
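The recurring "Nameserver limits exceeded" warnings come from the kubelet applying glibc's three-resolver limit to the host's /etc/resolv.conf: only the first three nameserver lines are used, which is why the applied line logged above is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A resolv.conf that would trigger this warning looks like the sketch below; the fourth entry is hypothetical:

    # /etc/resolv.conf
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # hypothetical fourth server, silently omitted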
May 13 00:22:46.011064 kubelet[2539]: I0513 00:22:46.011036 2539 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:22:46.011573 kubelet[2539]: I0513 00:22:46.011309 2539 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:22:46.011573 kubelet[2539]: I0513 00:22:46.011338 2539 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:22:46.011573 kubelet[2539]: I0513 00:22:46.011515 2539 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:22:46.011573 kubelet[2539]: I0513 00:22:46.011541 2539 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:22:46.011755 kubelet[2539]: I0513 00:22:46.011581 2539 state_mem.go:36] "Initialized new in-memory state store" May 13 00:22:46.011755 kubelet[2539]: I0513 00:22:46.011741 2539 kubelet.go:446] "Attempting to sync node with API server" May 13 00:22:46.011800 kubelet[2539]: I0513 00:22:46.011763 2539 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:22:46.011800 kubelet[2539]: I0513 00:22:46.011785 2539 kubelet.go:352] "Adding apiserver pod source" May 13 00:22:46.011800 kubelet[2539]: I0513 00:22:46.011795 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:22:46.013054 kubelet[2539]: I0513 00:22:46.012901 2539 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:22:46.013951 kubelet[2539]: I0513 00:22:46.013933 2539 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:22:46.016202 kubelet[2539]: I0513 00:22:46.014631 2539 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:22:46.016202 kubelet[2539]: I0513 00:22:46.014663 2539 server.go:1287] "Started kubelet" May 13 00:22:46.017660 kubelet[2539]: I0513 00:22:46.017645 2539 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:22:46.022562 kubelet[2539]: I0513 00:22:46.022518 2539 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:22:46.022892 kubelet[2539]: I0513 00:22:46.022677 2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:22:46.026063 kubelet[2539]: I0513 00:22:46.026031 2539 server.go:490] "Adding debug handlers to kubelet server" May 13 00:22:46.027228 kubelet[2539]: E0513 00:22:46.026551 2539 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:22:46.027228 kubelet[2539]: I0513 00:22:46.026851 2539 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:22:46.027228 kubelet[2539]: I0513 00:22:46.027089 2539 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:22:46.027517 kubelet[2539]: I0513 00:22:46.027497 2539 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:22:46.027754 kubelet[2539]: I0513 00:22:46.027738 2539 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:22:46.027872 kubelet[2539]: I0513 00:22:46.027855 2539 reconciler.go:26] "Reconciler: start to sync state" May 13 00:22:46.028162 kubelet[2539]: E0513 00:22:46.028142 2539 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:46.028875 kubelet[2539]: I0513 00:22:46.028845 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:22:46.029720 kubelet[2539]: I0513 00:22:46.029705 2539 factory.go:221] Registration of the systemd container factory successfully May 13 00:22:46.029930 kubelet[2539]: I0513 00:22:46.029913 2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:22:46.031249 kubelet[2539]: I0513 00:22:46.031222 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:22:46.031331 kubelet[2539]: I0513 00:22:46.031253 2539 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:22:46.031331 kubelet[2539]: I0513 00:22:46.031274 2539 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 00:22:46.031331 kubelet[2539]: I0513 00:22:46.031287 2539 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 00:22:46.031448 kubelet[2539]: E0513 00:22:46.031332 2539 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:22:46.031659 kubelet[2539]: I0513 00:22:46.031588 2539 factory.go:221] Registration of the containerd container factory successfully
May 13 00:22:46.067033 kubelet[2539]: I0513 00:22:46.067005 2539 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 00:22:46.067951 kubelet[2539]: I0513 00:22:46.067206 2539 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 00:22:46.067951 kubelet[2539]: I0513 00:22:46.067231 2539 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:22:46.067951 kubelet[2539]: I0513 00:22:46.067395 2539 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 00:22:46.067951 kubelet[2539]: I0513 00:22:46.067405 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 00:22:46.067951 kubelet[2539]: I0513 00:22:46.067434 2539 policy_none.go:49] "None policy: Start"
May 13 00:22:46.067951 kubelet[2539]: I0513 00:22:46.067443 2539 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 00:22:46.067951 kubelet[2539]: I0513 00:22:46.067453 2539 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:22:46.067951 kubelet[2539]: I0513 00:22:46.067545 2539 state_mem.go:75] "Updated machine memory state"
May 13 00:22:46.071710 kubelet[2539]: I0513 00:22:46.071683 2539 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:22:46.071975 kubelet[2539]: I0513 00:22:46.071857 2539 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 00:22:46.071975 kubelet[2539]: I0513 00:22:46.071875 2539 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:22:46.072101 kubelet[2539]: I0513 00:22:46.072045 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:22:46.072611 kubelet[2539]: E0513 00:22:46.072579 2539 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 13 00:22:46.132913 kubelet[2539]: I0513 00:22:46.132851 2539 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 13 00:22:46.133064 kubelet[2539]: I0513 00:22:46.132929 2539 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 00:22:46.133064 kubelet[2539]: I0513 00:22:46.132884 2539 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 00:22:46.138489 kubelet[2539]: E0513 00:22:46.138455 2539 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 13 00:22:46.176972 kubelet[2539]: I0513 00:22:46.176923 2539 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:22:46.184507 kubelet[2539]: I0513 00:22:46.184459 2539 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 13 00:22:46.184663 kubelet[2539]: I0513 00:22:46.184559 2539 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 13 00:22:46.228224 kubelet[2539]: I0513 00:22:46.228127 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15ec462c333880080d46a0218c9049b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"15ec462c333880080d46a0218c9049b9\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:22:46.228224 kubelet[2539]: I0513 00:22:46.228165 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15ec462c333880080d46a0218c9049b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"15ec462c333880080d46a0218c9049b9\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:22:46.328732 kubelet[2539]: I0513 00:22:46.328583 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:22:46.328732 kubelet[2539]: I0513 00:22:46.328637 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:22:46.328732 kubelet[2539]: I0513 00:22:46.328728 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:22:46.328949 kubelet[2539]: I0513 00:22:46.328752 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:22:46.328949 kubelet[2539]: I0513 00:22:46.328778 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:22:46.328949 kubelet[2539]: I0513 00:22:46.328849 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15ec462c333880080d46a0218c9049b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"15ec462c333880080d46a0218c9049b9\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:22:46.328949 kubelet[2539]: I0513 00:22:46.328875 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:22:46.418742 sudo[2577]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 13 00:22:46.419157 sudo[2577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 13 00:22:46.439022 kubelet[2539]: E0513 00:22:46.438879 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:46.439022 kubelet[2539]: E0513 00:22:46.438915 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:46.439022 kubelet[2539]: E0513 00:22:46.438964 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:46.877679 sudo[2577]: pam_unix(sudo:session): session closed for user root
May 13 00:22:47.012857 kubelet[2539]: I0513 00:22:47.012824 2539 apiserver.go:52] "Watching apiserver"
May 13 00:22:47.027938 kubelet[2539]: I0513 00:22:47.027902 2539 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:22:47.046798 kubelet[2539]: E0513 00:22:47.046751 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:47.047868 kubelet[2539]: I0513 00:22:47.047680 2539 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 13 00:22:47.052624 kubelet[2539]: I0513 00:22:47.052587 2539 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 00:22:47.053831 kubelet[2539]: E0513 00:22:47.053713 2539 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 00:22:47.055907 kubelet[2539]: E0513 00:22:47.055592 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:47.059956 kubelet[2539]: E0513 00:22:47.059931 2539 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 13 00:22:47.060125 kubelet[2539]: E0513 00:22:47.060090 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:47.081988 kubelet[2539]: I0513 00:22:47.081916 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.081896547 podStartE2EDuration="1.081896547s" podCreationTimestamp="2025-05-13 00:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:47.075082659 +0000 UTC m=+1.120472199" watchObservedRunningTime="2025-05-13 00:22:47.081896547 +0000 UTC m=+1.127286087"
May 13 00:22:47.087658 kubelet[2539]: I0513 00:22:47.087312 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.08729286 podStartE2EDuration="3.08729286s" podCreationTimestamp="2025-05-13 00:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:47.087270899 +0000 UTC m=+1.132660439" watchObservedRunningTime="2025-05-13 00:22:47.08729286 +0000 UTC m=+1.132682401"
May 13 00:22:47.087658 kubelet[2539]: I0513 00:22:47.087385 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.087380237 podStartE2EDuration="1.087380237s" podCreationTimestamp="2025-05-13 00:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:47.082145359 +0000 UTC m=+1.127534900" watchObservedRunningTime="2025-05-13 00:22:47.087380237 +0000 UTC m=+1.132769777"
May 13 00:22:47.641124 update_engine[1453]: I20250513 00:22:47.641049 1453 update_attempter.cc:509] Updating boot flags...
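
The repeated dns.go:153 errors above are the kubelet's resolv.conf check: the classic libc resolver honors at most three nameserver entries (MAXNS=3), so when the node's /etc/resolv.conf lists more, the kubelet drops the extras and logs the line it actually applied. A minimal sketch of that truncation in Go; the constant, helper, and the fourth nameserver are illustrative assumptions, not the kubelet's actual code (the log only shows the three survivors):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // maxNameservers mirrors the classic libc MAXNS limit of 3.
    const maxNameservers = 3

    // applyNameserverLimit keeps the first three entries and reports whether
    // any were dropped -- the condition logged as "Nameserver limits exceeded".
    func applyNameserverLimit(servers []string) ([]string, bool) {
    	if len(servers) <= maxNameservers {
    		return servers, false
    	}
    	return servers[:maxNameservers], true
    }

    func main() {
    	// Hypothetical input: a fourth nameserver beyond the three the log shows.
    	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
    	applied, truncated := applyNameserverLimit(configured)
    	if truncated {
    		fmt.Println("the applied nameserver line is:", strings.Join(applied, " "))
    	}
    }
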
May 13 00:22:48.049515 kubelet[2539]: E0513 00:22:48.049093 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:48.049515 kubelet[2539]: E0513 00:22:48.049164 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:48.049515 kubelet[2539]: E0513 00:22:48.049353 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:48.097208 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2603)
May 13 00:22:48.116763 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2605)
May 13 00:22:48.147211 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2605)
May 13 00:22:48.639398 sudo[1676]: pam_unix(sudo:session): session closed for user root
May 13 00:22:48.641638 sshd[1673]: pam_unix(sshd:session): session closed for user core
May 13 00:22:48.645834 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:54274.service: Deactivated successfully.
May 13 00:22:48.647773 systemd[1]: session-9.scope: Deactivated successfully.
May 13 00:22:48.648003 systemd[1]: session-9.scope: Consumed 3.961s CPU time, 157.3M memory peak, 0B memory swap peak.
May 13 00:22:48.648457 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
May 13 00:22:48.649326 systemd-logind[1452]: Removed session 9.
May 13 00:22:49.309941 kubelet[2539]: E0513 00:22:49.309888 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:51.607468 kubelet[2539]: I0513 00:22:51.607435 2539 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 00:22:51.608267 containerd[1466]: time="2025-05-13T00:22:51.608228187Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 00:22:51.608723 kubelet[2539]: I0513 00:22:51.608419 2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 00:22:52.521519 systemd[1]: Created slice kubepods-besteffort-pode2a274b6_199f_44c2_b97d_8b6173fc1ffa.slice - libcontainer container kubepods-besteffort-pode2a274b6_199f_44c2_b97d_8b6173fc1ffa.slice.
May 13 00:22:52.534603 systemd[1]: Created slice kubepods-burstable-pod758e91be_2be5_424c_bf4e_391282fb8246.slice - libcontainer container kubepods-burstable-pod758e91be_2be5_424c_bf4e_391282fb8246.slice.
May 13 00:22:52.572434 kubelet[2539]: I0513 00:22:52.572389 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cilium-run\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572434 kubelet[2539]: I0513 00:22:52.572429 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-bpf-maps\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572434 kubelet[2539]: I0513 00:22:52.572450 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cni-path\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572661 kubelet[2539]: I0513 00:22:52.572508 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-lib-modules\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572661 kubelet[2539]: I0513 00:22:52.572526 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/758e91be-2be5-424c-bf4e-391282fb8246-cilium-config-path\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572661 kubelet[2539]: I0513 00:22:52.572542 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-etc-cni-netd\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572661 kubelet[2539]: I0513 00:22:52.572558 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-host-proc-sys-kernel\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572661 kubelet[2539]: I0513 00:22:52.572604 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-hostproc\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572661 kubelet[2539]: I0513 00:22:52.572618 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-xtables-lock\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572814 kubelet[2539]: I0513 00:22:52.572632 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/758e91be-2be5-424c-bf4e-391282fb8246-hubble-tls\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572814 kubelet[2539]: I0513 00:22:52.572657 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2a274b6-199f-44c2-b97d-8b6173fc1ffa-xtables-lock\") pod \"kube-proxy-vqfwd\" (UID: \"e2a274b6-199f-44c2-b97d-8b6173fc1ffa\") " pod="kube-system/kube-proxy-vqfwd"
May 13 00:22:52.572814 kubelet[2539]: I0513 00:22:52.572675 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2a274b6-199f-44c2-b97d-8b6173fc1ffa-lib-modules\") pod \"kube-proxy-vqfwd\" (UID: \"e2a274b6-199f-44c2-b97d-8b6173fc1ffa\") " pod="kube-system/kube-proxy-vqfwd"
May 13 00:22:52.572814 kubelet[2539]: I0513 00:22:52.572696 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2a274b6-199f-44c2-b97d-8b6173fc1ffa-kube-proxy\") pod \"kube-proxy-vqfwd\" (UID: \"e2a274b6-199f-44c2-b97d-8b6173fc1ffa\") " pod="kube-system/kube-proxy-vqfwd"
May 13 00:22:52.572814 kubelet[2539]: I0513 00:22:52.572724 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m54t\" (UniqueName: \"kubernetes.io/projected/e2a274b6-199f-44c2-b97d-8b6173fc1ffa-kube-api-access-4m54t\") pod \"kube-proxy-vqfwd\" (UID: \"e2a274b6-199f-44c2-b97d-8b6173fc1ffa\") " pod="kube-system/kube-proxy-vqfwd"
May 13 00:22:52.572929 kubelet[2539]: I0513 00:22:52.572743 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cilium-cgroup\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572929 kubelet[2539]: I0513 00:22:52.572769 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/758e91be-2be5-424c-bf4e-391282fb8246-clustermesh-secrets\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572929 kubelet[2539]: I0513 00:22:52.572799 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-host-proc-sys-net\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.572929 kubelet[2539]: I0513 00:22:52.572815 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4drck\" (UniqueName: \"kubernetes.io/projected/758e91be-2be5-424c-bf4e-391282fb8246-kube-api-access-4drck\") pod \"cilium-9r966\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " pod="kube-system/cilium-9r966"
May 13 00:22:52.845620 systemd[1]: Created slice kubepods-besteffort-pod41345458_9fde_43c2_bdf4_802f9ddb2c96.slice - libcontainer container kubepods-besteffort-pod41345458_9fde_43c2_bdf4_802f9ddb2c96.slice.
May 13 00:22:52.875309 kubelet[2539]: I0513 00:22:52.875267 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktkq\" (UniqueName: \"kubernetes.io/projected/41345458-9fde-43c2-bdf4-802f9ddb2c96-kube-api-access-tktkq\") pod \"cilium-operator-6c4d7847fc-2mvrr\" (UID: \"41345458-9fde-43c2-bdf4-802f9ddb2c96\") " pod="kube-system/cilium-operator-6c4d7847fc-2mvrr"
May 13 00:22:52.875309 kubelet[2539]: I0513 00:22:52.875310 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41345458-9fde-43c2-bdf4-802f9ddb2c96-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2mvrr\" (UID: \"41345458-9fde-43c2-bdf4-802f9ddb2c96\") " pod="kube-system/cilium-operator-6c4d7847fc-2mvrr"
May 13 00:22:53.129830 kubelet[2539]: E0513 00:22:53.129781 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:53.130490 containerd[1466]: time="2025-05-13T00:22:53.130454348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqfwd,Uid:e2a274b6-199f-44c2-b97d-8b6173fc1ffa,Namespace:kube-system,Attempt:0,}"
May 13 00:22:53.137119 kubelet[2539]: E0513 00:22:53.137087 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:53.137664 containerd[1466]: time="2025-05-13T00:22:53.137623885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9r966,Uid:758e91be-2be5-424c-bf4e-391282fb8246,Namespace:kube-system,Attempt:0,}"
May 13 00:22:53.148680 kubelet[2539]: E0513 00:22:53.148644 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:53.149297 containerd[1466]: time="2025-05-13T00:22:53.149256286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2mvrr,Uid:41345458-9fde-43c2-bdf4-802f9ddb2c96,Namespace:kube-system,Attempt:0,}"
May 13 00:22:53.413748 containerd[1466]: time="2025-05-13T00:22:53.411783282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:22:53.413748 containerd[1466]: time="2025-05-13T00:22:53.411861820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:22:53.413748 containerd[1466]: time="2025-05-13T00:22:53.411872059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:53.413748 containerd[1466]: time="2025-05-13T00:22:53.412075755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:53.421993 containerd[1466]: time="2025-05-13T00:22:53.421866623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:22:53.422132 containerd[1466]: time="2025-05-13T00:22:53.421963045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:22:53.422132 containerd[1466]: time="2025-05-13T00:22:53.421982411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:53.422132 containerd[1466]: time="2025-05-13T00:22:53.422105535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:53.422967 containerd[1466]: time="2025-05-13T00:22:53.420998099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:22:53.423201 containerd[1466]: time="2025-05-13T00:22:53.423015828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:22:53.423201 containerd[1466]: time="2025-05-13T00:22:53.423070450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:53.424258 containerd[1466]: time="2025-05-13T00:22:53.423798699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:22:53.434413 systemd[1]: Started cri-containerd-b4a0e0d8a8b722000f5d0539b952b5334a5671c12e821a3755ae888cd6dc0bf0.scope - libcontainer container b4a0e0d8a8b722000f5d0539b952b5334a5671c12e821a3755ae888cd6dc0bf0.
May 13 00:22:53.443314 systemd[1]: Started cri-containerd-62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab.scope - libcontainer container 62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab.
May 13 00:22:53.448350 systemd[1]: Started cri-containerd-cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a.scope - libcontainer container cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a.
May 13 00:22:53.471620 containerd[1466]: time="2025-05-13T00:22:53.471414353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqfwd,Uid:e2a274b6-199f-44c2-b97d-8b6173fc1ffa,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4a0e0d8a8b722000f5d0539b952b5334a5671c12e821a3755ae888cd6dc0bf0\""
May 13 00:22:53.472627 kubelet[2539]: E0513 00:22:53.472598 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:53.476025 containerd[1466]: time="2025-05-13T00:22:53.475982728Z" level=info msg="CreateContainer within sandbox \"b4a0e0d8a8b722000f5d0539b952b5334a5671c12e821a3755ae888cd6dc0bf0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 00:22:53.479824 containerd[1466]: time="2025-05-13T00:22:53.479726311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9r966,Uid:758e91be-2be5-424c-bf4e-391282fb8246,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\""
May 13 00:22:53.480490 kubelet[2539]: E0513 00:22:53.480456 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:53.481742 containerd[1466]: time="2025-05-13T00:22:53.481707150Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 13 00:22:53.494401 containerd[1466]: time="2025-05-13T00:22:53.494339995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2mvrr,Uid:41345458-9fde-43c2-bdf4-802f9ddb2c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab\""
May 13 00:22:53.495384 kubelet[2539]: E0513 00:22:53.495237 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:53.497523 containerd[1466]: time="2025-05-13T00:22:53.497457525Z" level=info msg="CreateContainer within sandbox \"b4a0e0d8a8b722000f5d0539b952b5334a5671c12e821a3755ae888cd6dc0bf0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54e60d70760aca26a50b66fae336930dd4b3bc48e983e683f2ddfffb51e03710\""
May 13 00:22:53.497926 containerd[1466]: time="2025-05-13T00:22:53.497905091Z" level=info msg="StartContainer for \"54e60d70760aca26a50b66fae336930dd4b3bc48e983e683f2ddfffb51e03710\""
May 13 00:22:53.527440 systemd[1]: Started cri-containerd-54e60d70760aca26a50b66fae336930dd4b3bc48e983e683f2ddfffb51e03710.scope - libcontainer container 54e60d70760aca26a50b66fae336930dd4b3bc48e983e683f2ddfffb51e03710.
May 13 00:22:53.560311 containerd[1466]: time="2025-05-13T00:22:53.559589599Z" level=info msg="StartContainer for \"54e60d70760aca26a50b66fae336930dd4b3bc48e983e683f2ddfffb51e03710\" returns successfully"
May 13 00:22:54.060422 kubelet[2539]: E0513 00:22:54.060394 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:54.069196 kubelet[2539]: I0513 00:22:54.069117 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vqfwd" podStartSLOduration=2.069098029 podStartE2EDuration="2.069098029s" podCreationTimestamp="2025-05-13 00:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:54.068551695 +0000 UTC m=+8.113941235" watchObservedRunningTime="2025-05-13 00:22:54.069098029 +0000 UTC m=+8.114487569"
May 13 00:22:55.847472 kubelet[2539]: E0513 00:22:55.847430 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:56.063940 kubelet[2539]: E0513 00:22:56.063901 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:56.377003 kubelet[2539]: E0513 00:22:56.376965 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:57.064694 kubelet[2539]: E0513 00:22:57.064650 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:22:59.313355 kubelet[2539]: E0513 00:22:59.313296 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:00.069499 kubelet[2539]: E0513 00:23:00.069467 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:04.718242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290328277.mount: Deactivated successfully.
May 13 00:23:07.166770 containerd[1466]: time="2025-05-13T00:23:07.166680937Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:23:07.168248 containerd[1466]: time="2025-05-13T00:23:07.168221046Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 13 00:23:07.187403 containerd[1466]: time="2025-05-13T00:23:07.187331741Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:23:07.188988 containerd[1466]: time="2025-05-13T00:23:07.188940429Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.707195218s"
May 13 00:23:07.188988 containerd[1466]: time="2025-05-13T00:23:07.188980665Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 13 00:23:07.200197 containerd[1466]: time="2025-05-13T00:23:07.198597448Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 13 00:23:07.209873 containerd[1466]: time="2025-05-13T00:23:07.209825133Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:23:07.225560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907907168.mount: Deactivated successfully.
May 13 00:23:07.226967 containerd[1466]: time="2025-05-13T00:23:07.226923520Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b\""
May 13 00:23:07.229539 containerd[1466]: time="2025-05-13T00:23:07.229508025Z" level=info msg="StartContainer for \"34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b\""
May 13 00:23:07.252560 systemd[1]: run-containerd-runc-k8s.io-34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b-runc.DMBFyX.mount: Deactivated successfully.
May 13 00:23:07.268414 systemd[1]: Started cri-containerd-34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b.scope - libcontainer container 34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b.
May 13 00:23:07.295582 containerd[1466]: time="2025-05-13T00:23:07.295525108Z" level=info msg="StartContainer for \"34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b\" returns successfully"
May 13 00:23:07.304798 systemd[1]: cri-containerd-34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b.scope: Deactivated successfully.
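
For scale, the cilium image pull logged above moved 166730503 bytes in 13.707195218s, roughly 11.6 MiB/s. A quick check of that arithmetic using the two values from the log:

    package main

    import "fmt"

    func main() {
    	const bytesRead = 166730503.0 // from "bytes read=166730503"
    	const seconds = 13.707195218  // from "in 13.707195218s"
    	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024))
    }
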
May 13 00:23:07.955085 containerd[1466]: time="2025-05-13T00:23:07.952526193Z" level=info msg="shim disconnected" id=34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b namespace=k8s.io
May 13 00:23:07.955085 containerd[1466]: time="2025-05-13T00:23:07.955074059Z" level=warning msg="cleaning up after shim disconnected" id=34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b namespace=k8s.io
May 13 00:23:07.955085 containerd[1466]: time="2025-05-13T00:23:07.955088887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:23:08.083197 kubelet[2539]: E0513 00:23:08.083128 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:08.085988 containerd[1466]: time="2025-05-13T00:23:08.085954389Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:23:08.131295 containerd[1466]: time="2025-05-13T00:23:08.131210631Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09\""
May 13 00:23:08.132005 containerd[1466]: time="2025-05-13T00:23:08.131896542Z" level=info msg="StartContainer for \"1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09\""
May 13 00:23:08.162428 systemd[1]: Started cri-containerd-1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09.scope - libcontainer container 1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09.
May 13 00:23:08.188378 containerd[1466]: time="2025-05-13T00:23:08.188332496Z" level=info msg="StartContainer for \"1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09\" returns successfully"
May 13 00:23:08.200200 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:23:08.200441 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:23:08.200513 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 13 00:23:08.205722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:23:08.205941 systemd[1]: cri-containerd-1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09.scope: Deactivated successfully.
May 13 00:23:08.223048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b-rootfs.mount: Deactivated successfully.
May 13 00:23:08.226384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09-rootfs.mount: Deactivated successfully.
May 13 00:23:08.269397 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:23:08.269565 containerd[1466]: time="2025-05-13T00:23:08.269345183Z" level=info msg="shim disconnected" id=1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09 namespace=k8s.io
May 13 00:23:08.269565 containerd[1466]: time="2025-05-13T00:23:08.269414965Z" level=warning msg="cleaning up after shim disconnected" id=1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09 namespace=k8s.io
May 13 00:23:08.269565 containerd[1466]: time="2025-05-13T00:23:08.269423191Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:23:09.086057 kubelet[2539]: E0513 00:23:09.086023 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:09.087994 containerd[1466]: time="2025-05-13T00:23:09.087950772Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:23:09.100588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070085773.mount: Deactivated successfully.
May 13 00:23:09.158419 containerd[1466]: time="2025-05-13T00:23:09.158359314Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38\""
May 13 00:23:09.159000 containerd[1466]: time="2025-05-13T00:23:09.158943453Z" level=info msg="StartContainer for \"0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38\""
May 13 00:23:09.200562 systemd[1]: Started cri-containerd-0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38.scope - libcontainer container 0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38.
May 13 00:23:09.223138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929155957.mount: Deactivated successfully.
May 13 00:23:09.234652 containerd[1466]: time="2025-05-13T00:23:09.234602194Z" level=info msg="StartContainer for \"0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38\" returns successfully"
May 13 00:23:09.236149 systemd[1]: cri-containerd-0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38.scope: Deactivated successfully.
May 13 00:23:09.258016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38-rootfs.mount: Deactivated successfully.
May 13 00:23:09.335253 containerd[1466]: time="2025-05-13T00:23:09.335171037Z" level=info msg="shim disconnected" id=0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38 namespace=k8s.io
May 13 00:23:09.335253 containerd[1466]: time="2025-05-13T00:23:09.335244756Z" level=warning msg="cleaning up after shim disconnected" id=0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38 namespace=k8s.io
May 13 00:23:09.335253 containerd[1466]: time="2025-05-13T00:23:09.335256469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:23:09.449311 containerd[1466]: time="2025-05-13T00:23:09.449250092Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:23:09.450012 containerd[1466]: time="2025-05-13T00:23:09.449962763Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 13 00:23:09.451348 containerd[1466]: time="2025-05-13T00:23:09.451314025Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:23:09.452855 containerd[1466]: time="2025-05-13T00:23:09.452816102Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.254134436s"
May 13 00:23:09.452916 containerd[1466]: time="2025-05-13T00:23:09.452864182Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 13 00:23:09.454827 containerd[1466]: time="2025-05-13T00:23:09.454793793Z" level=info msg="CreateContainer within sandbox \"62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 00:23:09.468031 containerd[1466]: time="2025-05-13T00:23:09.467975717Z" level=info msg="CreateContainer within sandbox \"62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\""
May 13 00:23:09.468746 containerd[1466]: time="2025-05-13T00:23:09.468590463Z" level=info msg="StartContainer for \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\""
May 13 00:23:09.498376 systemd[1]: Started cri-containerd-392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2.scope - libcontainer container 392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2.
May 13 00:23:09.524360 containerd[1466]: time="2025-05-13T00:23:09.524316298Z" level=info msg="StartContainer for \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\" returns successfully"
May 13 00:23:10.088452 kubelet[2539]: E0513 00:23:10.088414 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:10.090296 kubelet[2539]: E0513 00:23:10.090269 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:10.091755 containerd[1466]: time="2025-05-13T00:23:10.091715879Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:23:10.472796 containerd[1466]: time="2025-05-13T00:23:10.472729362Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32\""
May 13 00:23:10.475630 containerd[1466]: time="2025-05-13T00:23:10.475592759Z" level=info msg="StartContainer for \"b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32\""
May 13 00:23:10.520238 kubelet[2539]: I0513 00:23:10.518596 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2mvrr" podStartSLOduration=2.560946304 podStartE2EDuration="18.518566284s" podCreationTimestamp="2025-05-13 00:22:52 +0000 UTC" firstStartedPulling="2025-05-13 00:22:53.495868697 +0000 UTC m=+7.541258237" lastFinishedPulling="2025-05-13 00:23:09.453488677 +0000 UTC m=+23.498878217" observedRunningTime="2025-05-13 00:23:10.481984554 +0000 UTC m=+24.527374084" watchObservedRunningTime="2025-05-13 00:23:10.518566284 +0000 UTC m=+24.563955824"
May 13 00:23:10.573422 systemd[1]: Started cri-containerd-b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32.scope - libcontainer container b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32.
May 13 00:23:10.601538 systemd[1]: cri-containerd-b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32.scope: Deactivated successfully.
May 13 00:23:10.604924 containerd[1466]: time="2025-05-13T00:23:10.604849785Z" level=info msg="StartContainer for \"b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32\" returns successfully"
May 13 00:23:10.832884 containerd[1466]: time="2025-05-13T00:23:10.832731058Z" level=info msg="shim disconnected" id=b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32 namespace=k8s.io
May 13 00:23:10.832884 containerd[1466]: time="2025-05-13T00:23:10.832790860Z" level=warning msg="cleaning up after shim disconnected" id=b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32 namespace=k8s.io
May 13 00:23:10.832884 containerd[1466]: time="2025-05-13T00:23:10.832800378Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:23:11.009380 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:52772.service - OpenSSH per-connection server daemon (10.0.0.1:52772).
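
The pod_startup_latency_tracker entry above for cilium-operator-6c4d7847fc-2mvrr shows how its fields fit together: the SLO duration appears to be the end-to-end time from podCreationTimestamp to watchObservedRunningTime minus the image-pull window between firstStartedPulling and lastFinishedPulling, i.e. 18.518566284s - 15.957619980s = 2.560946304s. A small reproduction of that arithmetic from the logged timestamps (the relation is inferred from these values, not quoted from kubelet source):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matching the wall-clock timestamps printed in the log.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-05-13 00:22:52 +0000 UTC")
    	firstPull := parse("2025-05-13 00:22:53.495868697 +0000 UTC")
    	lastPull := parse("2025-05-13 00:23:09.453488677 +0000 UTC")
    	observed := parse("2025-05-13 00:23:10.518566284 +0000 UTC")

    	e2e := observed.Sub(created)         // 18.518566284s = podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // 2.560946304s  = podStartSLOduration
    	fmt.Println(e2e, slo)
    }

For pods whose images were never pulled (firstStartedPulling and lastFinishedPulling at the zero time, as with kube-proxy-vqfwd earlier), the two durations coincide.
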
May 13 00:23:11.043759 sshd[3250]: Accepted publickey for core from 10.0.0.1 port 52772 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:23:11.045512 sshd[3250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:23:11.049275 systemd-logind[1452]: New session 10 of user core.
May 13 00:23:11.059354 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 00:23:11.101263 kubelet[2539]: E0513 00:23:11.101137 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:11.101709 kubelet[2539]: E0513 00:23:11.101299 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:11.103550 containerd[1466]: time="2025-05-13T00:23:11.103498886Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:23:11.125928 containerd[1466]: time="2025-05-13T00:23:11.125738267Z" level=info msg="CreateContainer within sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\""
May 13 00:23:11.127206 containerd[1466]: time="2025-05-13T00:23:11.126300905Z" level=info msg="StartContainer for \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\""
May 13 00:23:11.158395 systemd[1]: Started cri-containerd-8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05.scope - libcontainer container 8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05.
May 13 00:23:11.197383 containerd[1466]: time="2025-05-13T00:23:11.197327921Z" level=info msg="StartContainer for \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\" returns successfully"
May 13 00:23:11.203286 sshd[3250]: pam_unix(sshd:session): session closed for user core
May 13 00:23:11.210448 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:52772.service: Deactivated successfully.
May 13 00:23:11.212933 systemd[1]: session-10.scope: Deactivated successfully.
May 13 00:23:11.213976 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
May 13 00:23:11.216209 systemd-logind[1452]: Removed session 10.
May 13 00:23:11.224339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32-rootfs.mount: Deactivated successfully.
May 13 00:23:11.315930 kubelet[2539]: I0513 00:23:11.315859 2539 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 13 00:23:11.355889 systemd[1]: Created slice kubepods-burstable-pod5995fa2a_753b_4d74_946b_6569f770c25f.slice - libcontainer container kubepods-burstable-pod5995fa2a_753b_4d74_946b_6569f770c25f.slice.
May 13 00:23:11.361362 systemd[1]: Created slice kubepods-burstable-podfd1dc48c_ae93_4954_abd0_df5433b90aa3.slice - libcontainer container kubepods-burstable-podfd1dc48c_ae93_4954_abd0_df5433b90aa3.slice.
May 13 00:23:11.397357 kubelet[2539]: I0513 00:23:11.397298 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5995fa2a-753b-4d74-946b-6569f770c25f-config-volume\") pod \"coredns-668d6bf9bc-vwvm2\" (UID: \"5995fa2a-753b-4d74-946b-6569f770c25f\") " pod="kube-system/coredns-668d6bf9bc-vwvm2"
May 13 00:23:11.397357 kubelet[2539]: I0513 00:23:11.397354 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1dc48c-ae93-4954-abd0-df5433b90aa3-config-volume\") pod \"coredns-668d6bf9bc-sfvkz\" (UID: \"fd1dc48c-ae93-4954-abd0-df5433b90aa3\") " pod="kube-system/coredns-668d6bf9bc-sfvkz"
May 13 00:23:11.397471 kubelet[2539]: I0513 00:23:11.397375 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7kx4\" (UniqueName: \"kubernetes.io/projected/5995fa2a-753b-4d74-946b-6569f770c25f-kube-api-access-m7kx4\") pod \"coredns-668d6bf9bc-vwvm2\" (UID: \"5995fa2a-753b-4d74-946b-6569f770c25f\") " pod="kube-system/coredns-668d6bf9bc-vwvm2"
May 13 00:23:11.397471 kubelet[2539]: I0513 00:23:11.397401 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwc4c\" (UniqueName: \"kubernetes.io/projected/fd1dc48c-ae93-4954-abd0-df5433b90aa3-kube-api-access-zwc4c\") pod \"coredns-668d6bf9bc-sfvkz\" (UID: \"fd1dc48c-ae93-4954-abd0-df5433b90aa3\") " pod="kube-system/coredns-668d6bf9bc-sfvkz"
May 13 00:23:11.660827 kubelet[2539]: E0513 00:23:11.660720 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:11.662275 containerd[1466]: time="2025-05-13T00:23:11.662223758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vwvm2,Uid:5995fa2a-753b-4d74-946b-6569f770c25f,Namespace:kube-system,Attempt:0,}"
May 13 00:23:11.664445 kubelet[2539]: E0513 00:23:11.664391 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:11.665222 containerd[1466]: time="2025-05-13T00:23:11.664850799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sfvkz,Uid:fd1dc48c-ae93-4954-abd0-df5433b90aa3,Namespace:kube-system,Attempt:0,}"
May 13 00:23:12.105931 kubelet[2539]: E0513 00:23:12.105785 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:12.119141 kubelet[2539]: I0513 00:23:12.118949 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9r966" podStartSLOduration=6.402162843 podStartE2EDuration="20.118931486s" podCreationTimestamp="2025-05-13 00:22:52 +0000 UTC" firstStartedPulling="2025-05-13 00:22:53.481239946 +0000 UTC m=+7.526629486" lastFinishedPulling="2025-05-13 00:23:07.198008589 +0000 UTC m=+21.243398129" observedRunningTime="2025-05-13 00:23:12.118666827 +0000 UTC m=+26.164056367" watchObservedRunningTime="2025-05-13 00:23:12.118931486 +0000 UTC m=+26.164321016"
May 13 00:23:13.107832 kubelet[2539]: E0513 00:23:13.107798 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:13.369107 systemd-networkd[1391]: cilium_host: Link UP
May 13 00:23:13.369337 systemd-networkd[1391]: cilium_net: Link UP
May 13 00:23:13.369540 systemd-networkd[1391]: cilium_net: Gained carrier
May 13 00:23:13.369733 systemd-networkd[1391]: cilium_host: Gained carrier
May 13 00:23:13.469695 systemd-networkd[1391]: cilium_vxlan: Link UP
May 13 00:23:13.469704 systemd-networkd[1391]: cilium_vxlan: Gained carrier
May 13 00:23:13.544370 systemd-networkd[1391]: cilium_net: Gained IPv6LL
May 13 00:23:13.576323 systemd-networkd[1391]: cilium_host: Gained IPv6LL
May 13 00:23:13.688218 kernel: NET: Registered PF_ALG protocol family
May 13 00:23:14.109735 kubelet[2539]: E0513 00:23:14.109706 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:14.310957 systemd-networkd[1391]: lxc_health: Link UP
May 13 00:23:14.323269 systemd-networkd[1391]: lxc_health: Gained carrier
May 13 00:23:14.537303 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL
May 13 00:23:14.799591 systemd-networkd[1391]: lxc81f7fb3f1f67: Link UP
May 13 00:23:14.814209 kernel: eth0: renamed from tmp2dacc
May 13 00:23:14.821109 systemd-networkd[1391]: lxc545467a96255: Link UP
May 13 00:23:14.829003 systemd-networkd[1391]: lxc81f7fb3f1f67: Gained carrier
May 13 00:23:14.830222 kernel: eth0: renamed from tmp0aa7d
May 13 00:23:14.839513 systemd-networkd[1391]: lxc545467a96255: Gained carrier
May 13 00:23:15.138806 kubelet[2539]: E0513 00:23:15.138777 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:16.008316 systemd-networkd[1391]: lxc_health: Gained IPv6LL
May 13 00:23:16.113014 kubelet[2539]: E0513 00:23:16.112974 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:16.200343 systemd-networkd[1391]: lxc545467a96255: Gained IPv6LL
May 13 00:23:16.213122 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:60538.service - OpenSSH per-connection server daemon (10.0.0.1:60538).
May 13 00:23:16.250066 sshd[3779]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:23:16.251524 sshd[3779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:23:16.255811 systemd-logind[1452]: New session 11 of user core.
May 13 00:23:16.259321 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 00:23:16.328357 systemd-networkd[1391]: lxc81f7fb3f1f67: Gained IPv6LL
May 13 00:23:16.382170 sshd[3779]: pam_unix(sshd:session): session closed for user core
May 13 00:23:16.386420 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:60538.service: Deactivated successfully.
May 13 00:23:16.388422 systemd[1]: session-11.scope: Deactivated successfully.
May 13 00:23:16.389117 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
May 13 00:23:16.390107 systemd-logind[1452]: Removed session 11.
May 13 00:23:17.114103 kubelet[2539]: E0513 00:23:17.114071 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:18.326499 containerd[1466]: time="2025-05-13T00:23:18.326395149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:23:18.327642 containerd[1466]: time="2025-05-13T00:23:18.326982373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:23:18.327769 containerd[1466]: time="2025-05-13T00:23:18.327657772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:23:18.327769 containerd[1466]: time="2025-05-13T00:23:18.327740867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:23:18.355324 systemd[1]: Started cri-containerd-0aa7deec6c0ec25d5a9634132f0b71992139a2f93ff559a92ec390712fb73889.scope - libcontainer container 0aa7deec6c0ec25d5a9634132f0b71992139a2f93ff559a92ec390712fb73889.
May 13 00:23:18.365922 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:23:18.395605 containerd[1466]: time="2025-05-13T00:23:18.395567776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vwvm2,Uid:5995fa2a-753b-4d74-946b-6569f770c25f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aa7deec6c0ec25d5a9634132f0b71992139a2f93ff559a92ec390712fb73889\""
May 13 00:23:18.396291 kubelet[2539]: E0513 00:23:18.396265 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:23:18.398037 containerd[1466]: time="2025-05-13T00:23:18.397992191Z" level=info msg="CreateContainer within sandbox \"0aa7deec6c0ec25d5a9634132f0b71992139a2f93ff559a92ec390712fb73889\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:23:18.399775 containerd[1466]: time="2025-05-13T00:23:18.399458958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:23:18.399775 containerd[1466]: time="2025-05-13T00:23:18.399544658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:23:18.399775 containerd[1466]: time="2025-05-13T00:23:18.399560759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:23:18.399775 containerd[1466]: time="2025-05-13T00:23:18.399716512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:23:18.420301 systemd[1]: Started cri-containerd-2dacc440e29a3bde2f956eb92220c101edf333ef78ae8bdcd9757db8108ab985.scope - libcontainer container 2dacc440e29a3bde2f956eb92220c101edf333ef78ae8bdcd9757db8108ab985.
May 13 00:23:18.430826 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:23:18.453617 containerd[1466]: time="2025-05-13T00:23:18.453556879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sfvkz,Uid:fd1dc48c-ae93-4954-abd0-df5433b90aa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dacc440e29a3bde2f956eb92220c101edf333ef78ae8bdcd9757db8108ab985\"" May 13 00:23:18.454245 kubelet[2539]: E0513 00:23:18.454214 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:18.455892 containerd[1466]: time="2025-05-13T00:23:18.455852523Z" level=info msg="CreateContainer within sandbox \"2dacc440e29a3bde2f956eb92220c101edf333ef78ae8bdcd9757db8108ab985\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:23:18.950586 containerd[1466]: time="2025-05-13T00:23:18.950535282Z" level=info msg="CreateContainer within sandbox \"0aa7deec6c0ec25d5a9634132f0b71992139a2f93ff559a92ec390712fb73889\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b7986c38f801c9f124f68a8fb3f219f4b420c981b4f36ebb863a3e220bbb143f\"" May 13 00:23:18.951239 containerd[1466]: time="2025-05-13T00:23:18.951171377Z" level=info msg="StartContainer for \"b7986c38f801c9f124f68a8fb3f219f4b420c981b4f36ebb863a3e220bbb143f\"" May 13 00:23:18.979312 systemd[1]: Started cri-containerd-b7986c38f801c9f124f68a8fb3f219f4b420c981b4f36ebb863a3e220bbb143f.scope - libcontainer container b7986c38f801c9f124f68a8fb3f219f4b420c981b4f36ebb863a3e220bbb143f. May 13 00:23:19.156462 containerd[1466]: time="2025-05-13T00:23:19.156110961Z" level=info msg="StartContainer for \"b7986c38f801c9f124f68a8fb3f219f4b420c981b4f36ebb863a3e220bbb143f\" returns successfully" May 13 00:23:19.159754 kubelet[2539]: E0513 00:23:19.159729 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:19.163713 containerd[1466]: time="2025-05-13T00:23:19.163657370Z" level=info msg="CreateContainer within sandbox \"2dacc440e29a3bde2f956eb92220c101edf333ef78ae8bdcd9757db8108ab985\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0116486293fdd5b522d5e686215268648f74b83aa81ab4b7ae4cbbef419c30b\"" May 13 00:23:19.164248 containerd[1466]: time="2025-05-13T00:23:19.164215748Z" level=info msg="StartContainer for \"d0116486293fdd5b522d5e686215268648f74b83aa81ab4b7ae4cbbef419c30b\"" May 13 00:23:19.170364 kubelet[2539]: I0513 00:23:19.170303 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vwvm2" podStartSLOduration=27.170286664 podStartE2EDuration="27.170286664s" podCreationTimestamp="2025-05-13 00:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:23:19.16981048 +0000 UTC m=+33.215200020" watchObservedRunningTime="2025-05-13 00:23:19.170286664 +0000 UTC m=+33.215676204" May 13 00:23:19.198369 systemd[1]: Started cri-containerd-d0116486293fdd5b522d5e686215268648f74b83aa81ab4b7ae4cbbef419c30b.scope - libcontainer container d0116486293fdd5b522d5e686215268648f74b83aa81ab4b7ae4cbbef419c30b. 
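[Annotation] The RunPodSandbox / CreateContainer / StartContainer entries above are CRI calls served by containerd; each started task is what produces the cri-containerd-<id>.scope units systemd reports. A rough sketch of the same lifecycle using containerd's Go client directly is below. The socket path, container ID, and image reference are assumptions for illustration (the log does not name the coredns image), and the CRI plugin's real code path differs from this direct-client usage.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Image reference is illustrative only.
	image, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ..." roughly maps to NewContainer.
	container, err := client.NewContainer(ctx, "coredns-example",
		containerd.WithNewSnapshot("coredns-example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// "StartContainer ..." maps to creating and starting a task, which
	// is the point where systemd sees a new cri-containerd-<id>.scope.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}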
May 13 00:23:19.229164 containerd[1466]: time="2025-05-13T00:23:19.229043668Z" level=info msg="StartContainer for \"d0116486293fdd5b522d5e686215268648f74b83aa81ab4b7ae4cbbef419c30b\" returns successfully" May 13 00:23:19.332843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723133699.mount: Deactivated successfully. May 13 00:23:20.165224 kubelet[2539]: E0513 00:23:20.164938 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:20.165224 kubelet[2539]: E0513 00:23:20.165025 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:20.174220 kubelet[2539]: I0513 00:23:20.173779 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sfvkz" podStartSLOduration=28.173762346 podStartE2EDuration="28.173762346s" podCreationTimestamp="2025-05-13 00:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:23:20.173605382 +0000 UTC m=+34.218994922" watchObservedRunningTime="2025-05-13 00:23:20.173762346 +0000 UTC m=+34.219151886" May 13 00:23:21.166724 kubelet[2539]: E0513 00:23:21.166693 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:21.167250 kubelet[2539]: E0513 00:23:21.166802 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:21.395681 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:60544.service - OpenSSH per-connection server daemon (10.0.0.1:60544). May 13 00:23:21.431687 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 60544 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:21.433159 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:21.437144 systemd-logind[1452]: New session 12 of user core. May 13 00:23:21.446299 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:23:21.569911 sshd[3973]: pam_unix(sshd:session): session closed for user core May 13 00:23:21.573887 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:60544.service: Deactivated successfully. May 13 00:23:21.575750 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:23:21.576457 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. May 13 00:23:21.577323 systemd-logind[1452]: Removed session 12. May 13 00:23:22.168371 kubelet[2539]: E0513 00:23:22.168333 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:23.176384 kubelet[2539]: E0513 00:23:23.176358 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:26.588142 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:53160.service - OpenSSH per-connection server daemon (10.0.0.1:53160). 
May 13 00:23:26.621805 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 53160 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:26.623329 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:26.626929 systemd-logind[1452]: New session 13 of user core. May 13 00:23:26.636324 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:23:26.739464 sshd[3991]: pam_unix(sshd:session): session closed for user core May 13 00:23:26.749048 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:53160.service: Deactivated successfully. May 13 00:23:26.751101 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:23:26.752555 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. May 13 00:23:26.761445 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:53170.service - OpenSSH per-connection server daemon (10.0.0.1:53170). May 13 00:23:26.762435 systemd-logind[1452]: Removed session 13. May 13 00:23:26.789348 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 53170 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:26.790750 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:26.794260 systemd-logind[1452]: New session 14 of user core. May 13 00:23:26.807294 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 00:23:26.966023 sshd[4006]: pam_unix(sshd:session): session closed for user core May 13 00:23:26.975998 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:53170.service: Deactivated successfully. May 13 00:23:26.977605 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:23:26.979154 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. May 13 00:23:26.987402 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:53184.service - OpenSSH per-connection server daemon (10.0.0.1:53184). May 13 00:23:26.988266 systemd-logind[1452]: Removed session 14. May 13 00:23:27.020691 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 53184 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:27.022125 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:27.025885 systemd-logind[1452]: New session 15 of user core. May 13 00:23:27.033464 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 00:23:27.215515 sshd[4019]: pam_unix(sshd:session): session closed for user core May 13 00:23:27.219890 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:53184.service: Deactivated successfully. May 13 00:23:27.221755 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:23:27.222453 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. May 13 00:23:27.223346 systemd-logind[1452]: Removed session 15. May 13 00:23:32.225693 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:53188.service - OpenSSH per-connection server daemon (10.0.0.1:53188). May 13 00:23:32.258204 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 53188 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:32.259673 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:32.263366 systemd-logind[1452]: New session 16 of user core. May 13 00:23:32.274327 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 13 00:23:32.375089 sshd[4035]: pam_unix(sshd:session): session closed for user core May 13 00:23:32.378638 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:53188.service: Deactivated successfully. May 13 00:23:32.380379 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:23:32.381010 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. May 13 00:23:32.381816 systemd-logind[1452]: Removed session 16. May 13 00:23:37.392783 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:53462.service - OpenSSH per-connection server daemon (10.0.0.1:53462). May 13 00:23:37.427631 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 53462 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:37.429069 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:37.432647 systemd-logind[1452]: New session 17 of user core. May 13 00:23:37.443304 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 00:23:37.544133 sshd[4049]: pam_unix(sshd:session): session closed for user core May 13 00:23:37.556970 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:53462.service: Deactivated successfully. May 13 00:23:37.558826 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:23:37.560296 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. May 13 00:23:37.567502 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:53464.service - OpenSSH per-connection server daemon (10.0.0.1:53464). May 13 00:23:37.568550 systemd-logind[1452]: Removed session 17. May 13 00:23:37.595932 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 53464 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:37.597313 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:37.601427 systemd-logind[1452]: New session 18 of user core. May 13 00:23:37.614304 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:23:37.827634 sshd[4064]: pam_unix(sshd:session): session closed for user core May 13 00:23:37.839107 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:53464.service: Deactivated successfully. May 13 00:23:37.841015 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:23:37.842502 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. May 13 00:23:37.850423 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:53472.service - OpenSSH per-connection server daemon (10.0.0.1:53472). May 13 00:23:37.851452 systemd-logind[1452]: Removed session 18. May 13 00:23:37.882924 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 53472 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:37.884206 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:37.888403 systemd-logind[1452]: New session 19 of user core. May 13 00:23:37.897408 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 00:23:38.599172 sshd[4076]: pam_unix(sshd:session): session closed for user core May 13 00:23:38.609483 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:53472.service: Deactivated successfully. May 13 00:23:38.612493 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:23:38.614738 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. May 13 00:23:38.621798 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:53488.service - OpenSSH per-connection server daemon (10.0.0.1:53488). 
May 13 00:23:38.622696 systemd-logind[1452]: Removed session 19. May 13 00:23:38.649494 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 53488 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:38.650850 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:38.654413 systemd-logind[1452]: New session 20 of user core. May 13 00:23:38.666298 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:23:38.878251 sshd[4098]: pam_unix(sshd:session): session closed for user core May 13 00:23:38.886894 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:53488.service: Deactivated successfully. May 13 00:23:38.888964 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:23:38.890408 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. May 13 00:23:38.897543 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:53504.service - OpenSSH per-connection server daemon (10.0.0.1:53504). May 13 00:23:38.898403 systemd-logind[1452]: Removed session 20. May 13 00:23:38.925415 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:38.926901 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:38.930859 systemd-logind[1452]: New session 21 of user core. May 13 00:23:38.937307 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 00:23:39.045385 sshd[4111]: pam_unix(sshd:session): session closed for user core May 13 00:23:39.049139 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:53504.service: Deactivated successfully. May 13 00:23:39.051320 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:23:39.052170 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. May 13 00:23:39.053434 systemd-logind[1452]: Removed session 21. May 13 00:23:44.058040 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:40940.service - OpenSSH per-connection server daemon (10.0.0.1:40940). May 13 00:23:44.091513 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 40940 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:44.092996 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:44.096909 systemd-logind[1452]: New session 22 of user core. May 13 00:23:44.106324 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 00:23:44.215686 sshd[4127]: pam_unix(sshd:session): session closed for user core May 13 00:23:44.219358 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:40940.service: Deactivated successfully. May 13 00:23:44.221094 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:23:44.221770 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. May 13 00:23:44.222753 systemd-logind[1452]: Removed session 22. May 13 00:23:49.231609 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:40952.service - OpenSSH per-connection server daemon (10.0.0.1:40952). May 13 00:23:49.268856 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 40952 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:49.270663 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:49.274846 systemd-logind[1452]: New session 23 of user core. May 13 00:23:49.284319 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 13 00:23:49.391241 sshd[4146]: pam_unix(sshd:session): session closed for user core May 13 00:23:49.395413 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:40952.service: Deactivated successfully. May 13 00:23:49.397060 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:23:49.397615 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. May 13 00:23:49.398473 systemd-logind[1452]: Removed session 23. May 13 00:23:54.405040 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:56578.service - OpenSSH per-connection server daemon (10.0.0.1:56578). May 13 00:23:54.441398 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 56578 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:54.443210 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:54.447293 systemd-logind[1452]: New session 24 of user core. May 13 00:23:54.454403 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 00:23:54.560083 sshd[4164]: pam_unix(sshd:session): session closed for user core May 13 00:23:54.564149 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:56578.service: Deactivated successfully. May 13 00:23:54.566044 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:23:54.566649 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. May 13 00:23:54.567455 systemd-logind[1452]: Removed session 24. May 13 00:23:59.570819 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:56584.service - OpenSSH per-connection server daemon (10.0.0.1:56584). May 13 00:23:59.606488 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 56584 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:59.608100 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:59.612014 systemd-logind[1452]: New session 25 of user core. May 13 00:23:59.623333 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 00:23:59.722773 sshd[4178]: pam_unix(sshd:session): session closed for user core May 13 00:23:59.735134 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:56584.service: Deactivated successfully. May 13 00:23:59.736961 systemd[1]: session-25.scope: Deactivated successfully. May 13 00:23:59.738470 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. May 13 00:23:59.745681 systemd[1]: Started sshd@25-10.0.0.45:22-10.0.0.1:56598.service - OpenSSH per-connection server daemon (10.0.0.1:56598). May 13 00:23:59.746735 systemd-logind[1452]: Removed session 25. May 13 00:23:59.773721 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 56598 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:23:59.775206 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:59.778799 systemd-logind[1452]: New session 26 of user core. May 13 00:23:59.787291 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 13 00:24:01.107920 containerd[1466]: time="2025-05-13T00:24:01.107872647Z" level=info msg="StopContainer for \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\" with timeout 30 (s)" May 13 00:24:01.108710 containerd[1466]: time="2025-05-13T00:24:01.108692614Z" level=info msg="Stop container \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\" with signal terminated" May 13 00:24:01.117696 systemd[1]: run-containerd-runc-k8s.io-8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05-runc.FsrHoF.mount: Deactivated successfully. May 13 00:24:01.121039 systemd[1]: cri-containerd-392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2.scope: Deactivated successfully. May 13 00:24:01.134415 containerd[1466]: time="2025-05-13T00:24:01.134356826Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:24:01.136403 containerd[1466]: time="2025-05-13T00:24:01.136373260Z" level=info msg="StopContainer for \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\" with timeout 2 (s)" May 13 00:24:01.136604 containerd[1466]: time="2025-05-13T00:24:01.136586227Z" level=info msg="Stop container \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\" with signal terminated" May 13 00:24:01.143509 systemd-networkd[1391]: lxc_health: Link DOWN May 13 00:24:01.143517 systemd-networkd[1391]: lxc_health: Lost carrier May 13 00:24:01.144214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2-rootfs.mount: Deactivated successfully. May 13 00:24:01.153063 containerd[1466]: time="2025-05-13T00:24:01.152999587Z" level=info msg="shim disconnected" id=392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2 namespace=k8s.io May 13 00:24:01.153063 containerd[1466]: time="2025-05-13T00:24:01.153052609Z" level=warning msg="cleaning up after shim disconnected" id=392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2 namespace=k8s.io May 13 00:24:01.153063 containerd[1466]: time="2025-05-13T00:24:01.153060614Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:01.169454 containerd[1466]: time="2025-05-13T00:24:01.169414020Z" level=info msg="StopContainer for \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\" returns successfully" May 13 00:24:01.171605 systemd[1]: cri-containerd-8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05.scope: Deactivated successfully. May 13 00:24:01.172111 systemd[1]: cri-containerd-8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05.scope: Consumed 6.537s CPU time. May 13 00:24:01.174904 containerd[1466]: time="2025-05-13T00:24:01.174868795Z" level=info msg="StopPodSandbox for \"62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab\"" May 13 00:24:01.174967 containerd[1466]: time="2025-05-13T00:24:01.174921566Z" level=info msg="Container to stop \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:01.176967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab-shm.mount: Deactivated successfully. 
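[Annotation] The StopContainer entries above ("with timeout 30 (s)", "with timeout 2 (s)", "Stop container ... with signal terminated") reflect the CRI stop contract: the runtime delivers SIGTERM first and escalates to SIGKILL once the grace period lapses. A generic Go sketch of that pattern follows; it is a standalone illustration, not containerd's implementation.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout asks a process to exit gracefully, then force-kills
// it if the grace period elapses, mirroring the CRI stop semantics.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	// Graceful phase: SIGTERM, like "with signal terminated" above.
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		// Grace period elapsed; force-kill, the SIGKILL fallback.
		_ = cmd.Process.Kill()
		return fmt.Errorf("killed after %s grace period", timeout)
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// A 2-second grace period, matching the shorter timeout in the log.
	fmt.Println(stopWithTimeout(cmd, 2*time.Second))
}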
May 13 00:24:01.183924 systemd[1]: cri-containerd-62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab.scope: Deactivated successfully. May 13 00:24:01.193172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05-rootfs.mount: Deactivated successfully. May 13 00:24:01.201260 containerd[1466]: time="2025-05-13T00:24:01.201167641Z" level=info msg="shim disconnected" id=8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05 namespace=k8s.io May 13 00:24:01.201260 containerd[1466]: time="2025-05-13T00:24:01.201255729Z" level=warning msg="cleaning up after shim disconnected" id=8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05 namespace=k8s.io May 13 00:24:01.201438 containerd[1466]: time="2025-05-13T00:24:01.201266810Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:01.207737 containerd[1466]: time="2025-05-13T00:24:01.207427465Z" level=info msg="shim disconnected" id=62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab namespace=k8s.io May 13 00:24:01.207737 containerd[1466]: time="2025-05-13T00:24:01.207478071Z" level=warning msg="cleaning up after shim disconnected" id=62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab namespace=k8s.io May 13 00:24:01.207737 containerd[1466]: time="2025-05-13T00:24:01.207492980Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:01.217533 containerd[1466]: time="2025-05-13T00:24:01.217495677Z" level=info msg="StopContainer for \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\" returns successfully" May 13 00:24:01.218143 containerd[1466]: time="2025-05-13T00:24:01.218072961Z" level=info msg="StopPodSandbox for \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\"" May 13 00:24:01.218143 containerd[1466]: time="2025-05-13T00:24:01.218102998Z" level=info msg="Container to stop \"1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:01.218143 containerd[1466]: time="2025-05-13T00:24:01.218115662Z" level=info msg="Container to stop \"b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:01.218143 containerd[1466]: time="2025-05-13T00:24:01.218124400Z" level=info msg="Container to stop \"0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:01.218143 containerd[1466]: time="2025-05-13T00:24:01.218132685Z" level=info msg="Container to stop \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:01.218143 containerd[1466]: time="2025-05-13T00:24:01.218141271Z" level=info msg="Container to stop \"34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:01.224031 systemd[1]: cri-containerd-cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a.scope: Deactivated successfully. 
May 13 00:24:01.230147 containerd[1466]: time="2025-05-13T00:24:01.230101331Z" level=info msg="TearDown network for sandbox \"62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab\" successfully" May 13 00:24:01.230147 containerd[1466]: time="2025-05-13T00:24:01.230149252Z" level=info msg="StopPodSandbox for \"62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab\" returns successfully" May 13 00:24:01.243827 kubelet[2539]: I0513 00:24:01.243786 2539 scope.go:117] "RemoveContainer" containerID="392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2" May 13 00:24:01.245409 containerd[1466]: time="2025-05-13T00:24:01.245321942Z" level=info msg="RemoveContainer for \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\"" May 13 00:24:01.252667 containerd[1466]: time="2025-05-13T00:24:01.252613578Z" level=info msg="shim disconnected" id=cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a namespace=k8s.io May 13 00:24:01.252667 containerd[1466]: time="2025-05-13T00:24:01.252662741Z" level=warning msg="cleaning up after shim disconnected" id=cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a namespace=k8s.io May 13 00:24:01.252852 containerd[1466]: time="2025-05-13T00:24:01.252673803Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:01.254470 containerd[1466]: time="2025-05-13T00:24:01.254445309Z" level=info msg="RemoveContainer for \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\" returns successfully" May 13 00:24:01.254705 kubelet[2539]: I0513 00:24:01.254679 2539 scope.go:117] "RemoveContainer" containerID="392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2" May 13 00:24:01.257971 containerd[1466]: time="2025-05-13T00:24:01.257884272Z" level=error msg="ContainerStatus for \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\": not found" May 13 00:24:01.266225 kubelet[2539]: E0513 00:24:01.266168 2539 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\": not found" containerID="392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2" May 13 00:24:01.266329 kubelet[2539]: I0513 00:24:01.266238 2539 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2"} err="failed to get container status \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"392ec823a5d6f3ad49b383258de2958ab08f1eff200718656b3f7616d7d456a2\": not found" May 13 00:24:01.266632 containerd[1466]: time="2025-05-13T00:24:01.266606782Z" level=info msg="TearDown network for sandbox \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" successfully" May 13 00:24:01.266632 containerd[1466]: time="2025-05-13T00:24:01.266627743Z" level=info msg="StopPodSandbox for \"cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a\" returns successfully" May 13 00:24:01.286046 kubelet[2539]: I0513 00:24:01.285964 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/41345458-9fde-43c2-bdf4-802f9ddb2c96-cilium-config-path\") pod \"41345458-9fde-43c2-bdf4-802f9ddb2c96\" (UID: \"41345458-9fde-43c2-bdf4-802f9ddb2c96\") " May 13 00:24:01.286046 kubelet[2539]: I0513 00:24:01.286004 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktkq\" (UniqueName: \"kubernetes.io/projected/41345458-9fde-43c2-bdf4-802f9ddb2c96-kube-api-access-tktkq\") pod \"41345458-9fde-43c2-bdf4-802f9ddb2c96\" (UID: \"41345458-9fde-43c2-bdf4-802f9ddb2c96\") " May 13 00:24:01.289337 kubelet[2539]: I0513 00:24:01.289284 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41345458-9fde-43c2-bdf4-802f9ddb2c96-kube-api-access-tktkq" (OuterVolumeSpecName: "kube-api-access-tktkq") pod "41345458-9fde-43c2-bdf4-802f9ddb2c96" (UID: "41345458-9fde-43c2-bdf4-802f9ddb2c96"). InnerVolumeSpecName "kube-api-access-tktkq". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:24:01.289709 kubelet[2539]: I0513 00:24:01.289678 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41345458-9fde-43c2-bdf4-802f9ddb2c96-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41345458-9fde-43c2-bdf4-802f9ddb2c96" (UID: "41345458-9fde-43c2-bdf4-802f9ddb2c96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:24:01.387030 kubelet[2539]: I0513 00:24:01.386983 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cni-path\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387223 kubelet[2539]: I0513 00:24:01.387046 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/758e91be-2be5-424c-bf4e-391282fb8246-cilium-config-path\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387223 kubelet[2539]: I0513 00:24:01.387066 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-host-proc-sys-kernel\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387223 kubelet[2539]: I0513 00:24:01.387085 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-hostproc\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387223 kubelet[2539]: I0513 00:24:01.387102 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cilium-cgroup\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387223 kubelet[2539]: I0513 00:24:01.387091 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cni-path" (OuterVolumeSpecName: "cni-path") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.387538 kubelet[2539]: I0513 00:24:01.387146 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.387538 kubelet[2539]: I0513 00:24:01.387118 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-host-proc-sys-net\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387538 kubelet[2539]: I0513 00:24:01.387147 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.387538 kubelet[2539]: I0513 00:24:01.387218 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-hostproc" (OuterVolumeSpecName: "hostproc") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.387538 kubelet[2539]: I0513 00:24:01.387230 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-bpf-maps\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387692 kubelet[2539]: I0513 00:24:01.387241 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.387692 kubelet[2539]: I0513 00:24:01.387263 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/758e91be-2be5-424c-bf4e-391282fb8246-hubble-tls\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387692 kubelet[2539]: I0513 00:24:01.387283 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4drck\" (UniqueName: \"kubernetes.io/projected/758e91be-2be5-424c-bf4e-391282fb8246-kube-api-access-4drck\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387692 kubelet[2539]: I0513 00:24:01.387305 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cilium-run\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387692 kubelet[2539]: I0513 00:24:01.387319 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-etc-cni-netd\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387692 kubelet[2539]: I0513 00:24:01.387338 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-lib-modules\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387898 kubelet[2539]: I0513 00:24:01.387354 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-xtables-lock\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387898 kubelet[2539]: I0513 00:24:01.387374 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/758e91be-2be5-424c-bf4e-391282fb8246-clustermesh-secrets\") pod \"758e91be-2be5-424c-bf4e-391282fb8246\" (UID: \"758e91be-2be5-424c-bf4e-391282fb8246\") " May 13 00:24:01.387898 kubelet[2539]: I0513 00:24:01.387424 2539 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41345458-9fde-43c2-bdf4-802f9ddb2c96-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.387898 kubelet[2539]: I0513 00:24:01.387433 2539 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.387898 kubelet[2539]: I0513 00:24:01.387441 2539 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.387898 kubelet[2539]: I0513 00:24:01.387453 2539 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-hostproc\") on node 
\"localhost\" DevicePath \"\"" May 13 00:24:01.387898 kubelet[2539]: I0513 00:24:01.387462 2539 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.387898 kubelet[2539]: I0513 00:24:01.387471 2539 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tktkq\" (UniqueName: \"kubernetes.io/projected/41345458-9fde-43c2-bdf4-802f9ddb2c96-kube-api-access-tktkq\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.388495 kubelet[2539]: I0513 00:24:01.388200 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.390331 kubelet[2539]: I0513 00:24:01.390302 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.390400 kubelet[2539]: I0513 00:24:01.390337 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.390400 kubelet[2539]: I0513 00:24:01.390361 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.390400 kubelet[2539]: I0513 00:24:01.390376 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:01.390752 kubelet[2539]: I0513 00:24:01.390731 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/758e91be-2be5-424c-bf4e-391282fb8246-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:24:01.390979 kubelet[2539]: I0513 00:24:01.390944 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/758e91be-2be5-424c-bf4e-391282fb8246-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:24:01.391073 kubelet[2539]: I0513 00:24:01.391052 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758e91be-2be5-424c-bf4e-391282fb8246-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:24:01.391590 kubelet[2539]: I0513 00:24:01.391549 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758e91be-2be5-424c-bf4e-391282fb8246-kube-api-access-4drck" (OuterVolumeSpecName: "kube-api-access-4drck") pod "758e91be-2be5-424c-bf4e-391282fb8246" (UID: "758e91be-2be5-424c-bf4e-391282fb8246"). InnerVolumeSpecName "kube-api-access-4drck". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:24:01.487965 kubelet[2539]: I0513 00:24:01.487912 2539 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4drck\" (UniqueName: \"kubernetes.io/projected/758e91be-2be5-424c-bf4e-391282fb8246-kube-api-access-4drck\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.487965 kubelet[2539]: I0513 00:24:01.487949 2539 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/758e91be-2be5-424c-bf4e-391282fb8246-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.487965 kubelet[2539]: I0513 00:24:01.487960 2539 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.487965 kubelet[2539]: I0513 00:24:01.487971 2539 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.488223 kubelet[2539]: I0513 00:24:01.487981 2539 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.488223 kubelet[2539]: I0513 00:24:01.487992 2539 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.488223 kubelet[2539]: I0513 00:24:01.488004 2539 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/758e91be-2be5-424c-bf4e-391282fb8246-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.488223 kubelet[2539]: I0513 00:24:01.488014 2539 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/758e91be-2be5-424c-bf4e-391282fb8246-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.488223 kubelet[2539]: I0513 00:24:01.488023 2539 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.488223 kubelet[2539]: I0513 00:24:01.488033 2539 reconciler_common.go:299] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/758e91be-2be5-424c-bf4e-391282fb8246-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:24:01.549718 systemd[1]: Removed slice kubepods-besteffort-pod41345458_9fde_43c2_bdf4_802f9ddb2c96.slice - libcontainer container kubepods-besteffort-pod41345458_9fde_43c2_bdf4_802f9ddb2c96.slice. May 13 00:24:02.034793 kubelet[2539]: I0513 00:24:02.034741 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41345458-9fde-43c2-bdf4-802f9ddb2c96" path="/var/lib/kubelet/pods/41345458-9fde-43c2-bdf4-802f9ddb2c96/volumes" May 13 00:24:02.041527 systemd[1]: Removed slice kubepods-burstable-pod758e91be_2be5_424c_bf4e_391282fb8246.slice - libcontainer container kubepods-burstable-pod758e91be_2be5_424c_bf4e_391282fb8246.slice. May 13 00:24:02.041743 systemd[1]: kubepods-burstable-pod758e91be_2be5_424c_bf4e_391282fb8246.slice: Consumed 6.635s CPU time. May 13 00:24:02.113392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62e2067cfcf87511c9ccc9eb940a4cb9f62d12ed78f5fd5df0f8655a15df7fab-rootfs.mount: Deactivated successfully. May 13 00:24:02.113532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a-rootfs.mount: Deactivated successfully. May 13 00:24:02.113626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf0e84e1f96f9fd65f45f1cecc5e33a2315e6089bbeb028b9a23300f5c2f9a1a-shm.mount: Deactivated successfully. May 13 00:24:02.113738 systemd[1]: var-lib-kubelet-pods-41345458\x2d9fde\x2d43c2\x2dbdf4\x2d802f9ddb2c96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtktkq.mount: Deactivated successfully. May 13 00:24:02.113857 systemd[1]: var-lib-kubelet-pods-758e91be\x2d2be5\x2d424c\x2dbf4e\x2d391282fb8246-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4drck.mount: Deactivated successfully. May 13 00:24:02.113968 systemd[1]: var-lib-kubelet-pods-758e91be\x2d2be5\x2d424c\x2dbf4e\x2d391282fb8246-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:24:02.114072 systemd[1]: var-lib-kubelet-pods-758e91be\x2d2be5\x2d424c\x2dbf4e\x2d391282fb8246-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 00:24:02.252394 kubelet[2539]: I0513 00:24:02.252359 2539 scope.go:117] "RemoveContainer" containerID="8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05" May 13 00:24:02.254999 containerd[1466]: time="2025-05-13T00:24:02.254723185Z" level=info msg="RemoveContainer for \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\"" May 13 00:24:02.258920 containerd[1466]: time="2025-05-13T00:24:02.258884375Z" level=info msg="RemoveContainer for \"8e180a29c374837debc8da33e786489c94f744313c121584185ac3c670537a05\" returns successfully" May 13 00:24:02.259151 kubelet[2539]: I0513 00:24:02.259120 2539 scope.go:117] "RemoveContainer" containerID="b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32" May 13 00:24:02.260326 containerd[1466]: time="2025-05-13T00:24:02.260284970Z" level=info msg="RemoveContainer for \"b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32\"" May 13 00:24:02.263990 containerd[1466]: time="2025-05-13T00:24:02.263915455Z" level=info msg="RemoveContainer for \"b259896a997ce5925f1ee76893df10c97fb1df2195066362b3847255bc9f6b32\" returns successfully" May 13 00:24:02.264199 kubelet[2539]: I0513 00:24:02.264151 2539 scope.go:117] "RemoveContainer" containerID="0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38" May 13 00:24:02.265890 containerd[1466]: time="2025-05-13T00:24:02.265858969Z" level=info msg="RemoveContainer for \"0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38\"" May 13 00:24:02.272715 containerd[1466]: time="2025-05-13T00:24:02.272666122Z" level=info msg="RemoveContainer for \"0ad8bf4627de2f7d1ecb730333261671cac149391f0a83a0f376da8084e83e38\" returns successfully" May 13 00:24:02.272911 kubelet[2539]: I0513 00:24:02.272887 2539 scope.go:117] "RemoveContainer" containerID="1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09" May 13 00:24:02.274003 containerd[1466]: time="2025-05-13T00:24:02.273968521Z" level=info msg="RemoveContainer for \"1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09\"" May 13 00:24:02.277107 containerd[1466]: time="2025-05-13T00:24:02.277073223Z" level=info msg="RemoveContainer for \"1661b657cf51d35fe30446187d4a171bbe0b13607c2481d7c1dd0012de324a09\" returns successfully" May 13 00:24:02.277339 kubelet[2539]: I0513 00:24:02.277245 2539 scope.go:117] "RemoveContainer" containerID="34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b" May 13 00:24:02.280638 containerd[1466]: time="2025-05-13T00:24:02.278206988Z" level=info msg="RemoveContainer for \"34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b\"" May 13 00:24:02.281765 containerd[1466]: time="2025-05-13T00:24:02.281728195Z" level=info msg="RemoveContainer for \"34264edb6a043349e258b84be77e7a3e9091eb82e0155ca28cb85e663d38fd3b\" returns successfully" May 13 00:24:03.077309 sshd[4192]: pam_unix(sshd:session): session closed for user core May 13 00:24:03.089025 systemd[1]: sshd@25-10.0.0.45:22-10.0.0.1:56598.service: Deactivated successfully. May 13 00:24:03.090666 systemd[1]: session-26.scope: Deactivated successfully. May 13 00:24:03.092318 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. May 13 00:24:03.098447 systemd[1]: Started sshd@26-10.0.0.45:22-10.0.0.1:56600.service - OpenSSH per-connection server daemon (10.0.0.1:56600). May 13 00:24:03.099454 systemd-logind[1452]: Removed session 26. 
May 13 00:24:03.130763 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 56600 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:24:03.132355 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:03.136257 systemd-logind[1452]: New session 27 of user core. May 13 00:24:03.146308 systemd[1]: Started session-27.scope - Session 27 of User core. May 13 00:24:03.844258 sshd[4353]: pam_unix(sshd:session): session closed for user core May 13 00:24:03.860625 systemd[1]: sshd@26-10.0.0.45:22-10.0.0.1:56600.service: Deactivated successfully. May 13 00:24:03.864573 systemd[1]: session-27.scope: Deactivated successfully. May 13 00:24:03.868149 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit. May 13 00:24:03.879572 kubelet[2539]: I0513 00:24:03.879545 2539 memory_manager.go:355] "RemoveStaleState removing state" podUID="758e91be-2be5-424c-bf4e-391282fb8246" containerName="cilium-agent" May 13 00:24:03.879572 kubelet[2539]: I0513 00:24:03.879570 2539 memory_manager.go:355] "RemoveStaleState removing state" podUID="41345458-9fde-43c2-bdf4-802f9ddb2c96" containerName="cilium-operator" May 13 00:24:03.880334 systemd[1]: Started sshd@27-10.0.0.45:22-10.0.0.1:38966.service - OpenSSH per-connection server daemon (10.0.0.1:38966). May 13 00:24:03.885301 systemd-logind[1452]: Removed session 27. May 13 00:24:03.901233 systemd[1]: Created slice kubepods-burstable-podb3c091a3_2aa2_45b2_a5eb_3298f7e6a125.slice - libcontainer container kubepods-burstable-podb3c091a3_2aa2_45b2_a5eb_3298f7e6a125.slice. May 13 00:24:03.926228 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 38966 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos May 13 00:24:03.927686 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:03.931414 systemd-logind[1452]: New session 28 of user core. May 13 00:24:03.941308 systemd[1]: Started session-28.scope - Session 28 of User core. May 13 00:24:03.992215 sshd[4366]: pam_unix(sshd:session): session closed for user core May 13 00:24:04.002103 systemd[1]: sshd@27-10.0.0.45:22-10.0.0.1:38966.service: Deactivated successfully. 
May 13 00:24:04.002492 kubelet[2539]: I0513 00:24:04.002467 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-cilium-config-path\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002552 kubelet[2539]: I0513 00:24:04.002500 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-cni-path\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002552 kubelet[2539]: I0513 00:24:04.002518 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-cilium-ipsec-secrets\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002552 kubelet[2539]: I0513 00:24:04.002533 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-hostproc\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002650 kubelet[2539]: I0513 00:24:04.002575 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-lib-modules\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002650 kubelet[2539]: I0513 00:24:04.002601 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-etc-cni-netd\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002650 kubelet[2539]: I0513 00:24:04.002638 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-bpf-maps\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002722 kubelet[2539]: I0513 00:24:04.002666 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-cilium-cgroup\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002722 kubelet[2539]: I0513 00:24:04.002686 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-host-proc-sys-kernel\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002722 kubelet[2539]: I0513 00:24:04.002717 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-clustermesh-secrets\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002793 kubelet[2539]: I0513 00:24:04.002737 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-host-proc-sys-net\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002793 kubelet[2539]: I0513 00:24:04.002756 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j77t\" (UniqueName: \"kubernetes.io/projected/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-kube-api-access-6j77t\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002844 kubelet[2539]: I0513 00:24:04.002798 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-xtables-lock\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002844 kubelet[2539]: I0513 00:24:04.002830 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-hubble-tls\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.002895 kubelet[2539]: I0513 00:24:04.002867 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3c091a3-2aa2-45b2-a5eb-3298f7e6a125-cilium-run\") pod \"cilium-ff5td\" (UID: \"b3c091a3-2aa2-45b2-a5eb-3298f7e6a125\") " pod="kube-system/cilium-ff5td"
May 13 00:24:04.004035 systemd[1]: session-28.scope: Deactivated successfully.
May 13 00:24:04.005562 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
May 13 00:24:04.028446 systemd[1]: Started sshd@28-10.0.0.45:22-10.0.0.1:38978.service - OpenSSH per-connection server daemon (10.0.0.1:38978).
May 13 00:24:04.029360 systemd-logind[1452]: Removed session 28.
May 13 00:24:04.034683 kubelet[2539]: I0513 00:24:04.034642 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="758e91be-2be5-424c-bf4e-391282fb8246" path="/var/lib/kubelet/pods/758e91be-2be5-424c-bf4e-391282fb8246/volumes"
May 13 00:24:04.058018 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 38978 ssh2: RSA SHA256:B4t1mGmM++usqbQmruue/FcXVPBtYThSLbULgD82Hos
May 13 00:24:04.059416 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:24:04.063208 systemd-logind[1452]: New session 29 of user core.
May 13 00:24:04.074276 systemd[1]: Started session-29.scope - Session 29 of User core.
May 13 00:24:04.205617 kubelet[2539]: E0513 00:24:04.205582 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:04.206258 containerd[1466]: time="2025-05-13T00:24:04.206122475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ff5td,Uid:b3c091a3-2aa2-45b2-a5eb-3298f7e6a125,Namespace:kube-system,Attempt:0,}"
May 13 00:24:04.228335 containerd[1466]: time="2025-05-13T00:24:04.228234419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:24:04.228335 containerd[1466]: time="2025-05-13T00:24:04.228286589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:24:04.228335 containerd[1466]: time="2025-05-13T00:24:04.228299283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:24:04.228491 containerd[1466]: time="2025-05-13T00:24:04.228387782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:24:04.250368 systemd[1]: Started cri-containerd-25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc.scope - libcontainer container 25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc.
May 13 00:24:04.274639 containerd[1466]: time="2025-05-13T00:24:04.274600989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ff5td,Uid:b3c091a3-2aa2-45b2-a5eb-3298f7e6a125,Namespace:kube-system,Attempt:0,} returns sandbox id \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\""
May 13 00:24:04.275375 kubelet[2539]: E0513 00:24:04.275330 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:04.278058 containerd[1466]: time="2025-05-13T00:24:04.278012902Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:24:04.312034 containerd[1466]: time="2025-05-13T00:24:04.311951661Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e9c7342893979cd178147347dd2e6bce0785e868eff8acbdc71fb77de84261d\""
May 13 00:24:04.312397 containerd[1466]: time="2025-05-13T00:24:04.312370941Z" level=info msg="StartContainer for \"1e9c7342893979cd178147347dd2e6bce0785e868eff8acbdc71fb77de84261d\""
May 13 00:24:04.340308 systemd[1]: Started cri-containerd-1e9c7342893979cd178147347dd2e6bce0785e868eff8acbdc71fb77de84261d.scope - libcontainer container 1e9c7342893979cd178147347dd2e6bce0785e868eff8acbdc71fb77de84261d.
May 13 00:24:04.366100 containerd[1466]: time="2025-05-13T00:24:04.366038716Z" level=info msg="StartContainer for \"1e9c7342893979cd178147347dd2e6bce0785e868eff8acbdc71fb77de84261d\" returns successfully"
May 13 00:24:04.376355 systemd[1]: cri-containerd-1e9c7342893979cd178147347dd2e6bce0785e868eff8acbdc71fb77de84261d.scope: Deactivated successfully.
May 13 00:24:04.405661 containerd[1466]: time="2025-05-13T00:24:04.405535794Z" level=info msg="shim disconnected" id=1e9c7342893979cd178147347dd2e6bce0785e868eff8acbdc71fb77de84261d namespace=k8s.io
May 13 00:24:04.405661 containerd[1466]: time="2025-05-13T00:24:04.405591160Z" level=warning msg="cleaning up after shim disconnected" id=1e9c7342893979cd178147347dd2e6bce0785e868eff8acbdc71fb77de84261d namespace=k8s.io
May 13 00:24:04.405661 containerd[1466]: time="2025-05-13T00:24:04.405599936Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:24:05.260753 kubelet[2539]: E0513 00:24:05.260717 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:05.262784 containerd[1466]: time="2025-05-13T00:24:05.262732903Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:24:05.277303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622999539.mount: Deactivated successfully.
May 13 00:24:05.297399 containerd[1466]: time="2025-05-13T00:24:05.297343497Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13\""
May 13 00:24:05.297891 containerd[1466]: time="2025-05-13T00:24:05.297869009Z" level=info msg="StartContainer for \"1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13\""
May 13 00:24:05.330324 systemd[1]: Started cri-containerd-1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13.scope - libcontainer container 1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13.
May 13 00:24:05.354941 containerd[1466]: time="2025-05-13T00:24:05.354888907Z" level=info msg="StartContainer for \"1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13\" returns successfully"
May 13 00:24:05.362046 systemd[1]: cri-containerd-1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13.scope: Deactivated successfully.
May 13 00:24:05.384654 containerd[1466]: time="2025-05-13T00:24:05.384590545Z" level=info msg="shim disconnected" id=1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13 namespace=k8s.io
May 13 00:24:05.384654 containerd[1466]: time="2025-05-13T00:24:05.384648736Z" level=warning msg="cleaning up after shim disconnected" id=1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13 namespace=k8s.io
May 13 00:24:05.384880 containerd[1466]: time="2025-05-13T00:24:05.384661189Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:24:06.090430 kubelet[2539]: E0513 00:24:06.090393 2539 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:24:06.109626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b743219462abb8c058afe6ff4cddf01ad052feb599eb69e118ff092ac0c2e13-rootfs.mount: Deactivated successfully.
May 13 00:24:06.263972 kubelet[2539]: E0513 00:24:06.263932 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:06.265822 containerd[1466]: time="2025-05-13T00:24:06.265760775Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:24:06.287413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109519030.mount: Deactivated successfully.
May 13 00:24:06.293622 containerd[1466]: time="2025-05-13T00:24:06.293575340Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6\""
May 13 00:24:06.294241 containerd[1466]: time="2025-05-13T00:24:06.294215511Z" level=info msg="StartContainer for \"441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6\""
May 13 00:24:06.324373 systemd[1]: Started cri-containerd-441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6.scope - libcontainer container 441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6.
May 13 00:24:06.355969 systemd[1]: cri-containerd-441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6.scope: Deactivated successfully.
May 13 00:24:06.432290 containerd[1466]: time="2025-05-13T00:24:06.432222504Z" level=info msg="StartContainer for \"441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6\" returns successfully"
May 13 00:24:06.480753 containerd[1466]: time="2025-05-13T00:24:06.480685710Z" level=info msg="shim disconnected" id=441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6 namespace=k8s.io
May 13 00:24:06.480753 containerd[1466]: time="2025-05-13T00:24:06.480744181Z" level=warning msg="cleaning up after shim disconnected" id=441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6 namespace=k8s.io
May 13 00:24:06.480753 containerd[1466]: time="2025-05-13T00:24:06.480752567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:24:07.032456 kubelet[2539]: E0513 00:24:07.032419 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:07.108858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-441eb957df4af4b736d0747570ce95c3c7156d2f2bb0a3b586a50796821676e6-rootfs.mount: Deactivated successfully.
May 13 00:24:07.267269 kubelet[2539]: E0513 00:24:07.267237 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:07.269716 containerd[1466]: time="2025-05-13T00:24:07.269242640Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:24:07.288367 containerd[1466]: time="2025-05-13T00:24:07.288251838Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c\""
May 13 00:24:07.288954 containerd[1466]: time="2025-05-13T00:24:07.288901155Z" level=info msg="StartContainer for \"7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c\""
May 13 00:24:07.318423 systemd[1]: Started cri-containerd-7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c.scope - libcontainer container 7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c.
May 13 00:24:07.342117 systemd[1]: cri-containerd-7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c.scope: Deactivated successfully.
May 13 00:24:07.344536 containerd[1466]: time="2025-05-13T00:24:07.344500648Z" level=info msg="StartContainer for \"7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c\" returns successfully"
May 13 00:24:07.367264 containerd[1466]: time="2025-05-13T00:24:07.367148523Z" level=info msg="shim disconnected" id=7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c namespace=k8s.io
May 13 00:24:07.367264 containerd[1466]: time="2025-05-13T00:24:07.367256107Z" level=warning msg="cleaning up after shim disconnected" id=7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c namespace=k8s.io
May 13 00:24:07.367264 containerd[1466]: time="2025-05-13T00:24:07.367265646Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:24:08.032106 kubelet[2539]: E0513 00:24:08.032062 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:08.108924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c4e64c9fd080c6439a8d60cf2c68f32f1d706ab9c655db69bf55481122aac5c-rootfs.mount: Deactivated successfully.
May 13 00:24:08.193607 kubelet[2539]: I0513 00:24:08.193555 2539 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:24:08Z","lastTransitionTime":"2025-05-13T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:24:08.271151 kubelet[2539]: E0513 00:24:08.271105 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:08.273203 containerd[1466]: time="2025-05-13T00:24:08.272798509Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:24:08.290470 containerd[1466]: time="2025-05-13T00:24:08.290332272Z" level=info msg="CreateContainer within sandbox \"25c3fb99c7ff5a09884b6792c1a10d77e2bad552fa7b0813d8c3c5f1111900cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"566935d185cf76ce8052d0f3a64c4bed9a3377d4321a0ee6007d6bd0fbd58581\""
May 13 00:24:08.290917 containerd[1466]: time="2025-05-13T00:24:08.290893842Z" level=info msg="StartContainer for \"566935d185cf76ce8052d0f3a64c4bed9a3377d4321a0ee6007d6bd0fbd58581\""
May 13 00:24:08.323355 systemd[1]: Started cri-containerd-566935d185cf76ce8052d0f3a64c4bed9a3377d4321a0ee6007d6bd0fbd58581.scope - libcontainer container 566935d185cf76ce8052d0f3a64c4bed9a3377d4321a0ee6007d6bd0fbd58581.
May 13 00:24:08.351929 containerd[1466]: time="2025-05-13T00:24:08.351881842Z" level=info msg="StartContainer for \"566935d185cf76ce8052d0f3a64c4bed9a3377d4321a0ee6007d6bd0fbd58581\" returns successfully"
May 13 00:24:08.765230 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 00:24:09.275651 kubelet[2539]: E0513 00:24:09.275613 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:09.288875 kubelet[2539]: I0513 00:24:09.288791 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ff5td" podStartSLOduration=6.288774315 podStartE2EDuration="6.288774315s" podCreationTimestamp="2025-05-13 00:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:24:09.288608038 +0000 UTC m=+83.333997598" watchObservedRunningTime="2025-05-13 00:24:09.288774315 +0000 UTC m=+83.334163855"
May 13 00:24:10.277578 kubelet[2539]: E0513 00:24:10.277537 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:10.416101 systemd[1]: run-containerd-runc-k8s.io-566935d185cf76ce8052d0f3a64c4bed9a3377d4321a0ee6007d6bd0fbd58581-runc.fq73LU.mount: Deactivated successfully.
May 13 00:24:12.473123 systemd-networkd[1391]: lxc_health: Link UP
May 13 00:24:12.482738 systemd-networkd[1391]: lxc_health: Gained carrier
May 13 00:24:12.624798 systemd[1]: run-containerd-runc-k8s.io-566935d185cf76ce8052d0f3a64c4bed9a3377d4321a0ee6007d6bd0fbd58581-runc.JJylvY.mount: Deactivated successfully.
May 13 00:24:14.184360 systemd-networkd[1391]: lxc_health: Gained IPv6LL
May 13 00:24:14.207404 kubelet[2539]: E0513 00:24:14.207371 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:14.287298 kubelet[2539]: E0513 00:24:14.287263 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:14.798774 systemd[1]: run-containerd-runc-k8s.io-566935d185cf76ce8052d0f3a64c4bed9a3377d4321a0ee6007d6bd0fbd58581-runc.0CYISc.mount: Deactivated successfully.
May 13 00:24:15.288699 kubelet[2539]: E0513 00:24:15.288657 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:16.033623 kubelet[2539]: E0513 00:24:16.032712 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:17.031757 kubelet[2539]: E0513 00:24:17.031726 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:24:19.074584 sshd[4374]: pam_unix(sshd:session): session closed for user core
May 13 00:24:19.079302 systemd[1]: sshd@28-10.0.0.45:22-10.0.0.1:38978.service: Deactivated successfully.
May 13 00:24:19.081413 systemd[1]: session-29.scope: Deactivated successfully.
May 13 00:24:19.082152 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit.
May 13 00:24:19.083241 systemd-logind[1452]: Removed session 29.