May 9 00:39:03.901537 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:52:37 -00 2025 May 9 00:39:03.901559 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:39:03.901570 kernel: BIOS-provided physical RAM map: May 9 00:39:03.901577 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 9 00:39:03.901583 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 9 00:39:03.901589 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 9 00:39:03.901596 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 9 00:39:03.901602 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 9 00:39:03.901608 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 9 00:39:03.901615 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 9 00:39:03.901624 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 9 00:39:03.901630 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved May 9 00:39:03.901636 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 May 9 00:39:03.901643 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved May 9 00:39:03.901650 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 9 00:39:03.901657 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 9 00:39:03.901666 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 9 00:39:03.901673 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 9 00:39:03.901680 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 9 00:39:03.901686 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 9 00:39:03.901693 kernel: NX (Execute Disable) protection: active May 9 00:39:03.901700 kernel: APIC: Static calls initialized May 9 00:39:03.901706 kernel: efi: EFI v2.7 by EDK II May 9 00:39:03.901713 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 May 9 00:39:03.901720 kernel: SMBIOS 2.8 present. 
May 9 00:39:03.901727 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 9 00:39:03.901744 kernel: Hypervisor detected: KVM May 9 00:39:03.901753 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 9 00:39:03.901760 kernel: kvm-clock: using sched offset of 4684786380 cycles May 9 00:39:03.901767 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 9 00:39:03.901774 kernel: tsc: Detected 2794.748 MHz processor May 9 00:39:03.901781 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 9 00:39:03.901789 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 9 00:39:03.901796 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 9 00:39:03.901802 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 9 00:39:03.901809 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 9 00:39:03.901818 kernel: Using GB pages for direct mapping May 9 00:39:03.901835 kernel: Secure boot disabled May 9 00:39:03.901851 kernel: ACPI: Early table checksum verification disabled May 9 00:39:03.901870 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 9 00:39:03.901895 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 9 00:39:03.901910 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:03.901918 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:03.901928 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 9 00:39:03.901935 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:03.901942 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:03.901950 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:03.901962 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:39:03.901970 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 9 00:39:03.901977 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 9 00:39:03.901987 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 9 00:39:03.901994 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 9 00:39:03.902001 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 9 00:39:03.902008 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 9 00:39:03.902015 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 9 00:39:03.902022 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 9 00:39:03.902029 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 9 00:39:03.902036 kernel: No NUMA configuration found May 9 00:39:03.902044 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 9 00:39:03.902051 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 9 00:39:03.902063 kernel: Zone ranges: May 9 00:39:03.902071 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 9 00:39:03.902078 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 9 00:39:03.902085 kernel: Normal empty May 9 00:39:03.902092 kernel: Movable zone start for each node May 9 00:39:03.902099 kernel: Early memory node ranges May 9 00:39:03.902106 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] May 9 00:39:03.902113 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 9 00:39:03.902120 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 9 00:39:03.902130 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 9 00:39:03.902136 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 9 00:39:03.902147 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 9 00:39:03.902155 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 9 00:39:03.902169 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 9 00:39:03.902182 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 9 00:39:03.902193 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 9 00:39:03.902203 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 9 00:39:03.902210 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 9 00:39:03.902220 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 9 00:39:03.902227 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 9 00:39:03.902237 kernel: ACPI: PM-Timer IO Port: 0x608 May 9 00:39:03.902244 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 9 00:39:03.902251 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 9 00:39:03.902258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 9 00:39:03.902267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 9 00:39:03.902274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 9 00:39:03.902281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 9 00:39:03.902291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 9 00:39:03.902304 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 9 00:39:03.902315 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 9 00:39:03.902333 kernel: TSC deadline timer available May 9 00:39:03.902349 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 9 00:39:03.902377 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 9 00:39:03.902393 kernel: kvm-guest: KVM setup pv remote TLB flush May 9 00:39:03.902400 kernel: kvm-guest: setup PV sched yield May 9 00:39:03.902408 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 9 00:39:03.902415 kernel: Booting paravirtualized kernel on KVM May 9 00:39:03.902425 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 9 00:39:03.902432 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 9 00:39:03.902450 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 9 00:39:03.902461 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 9 00:39:03.902468 kernel: pcpu-alloc: [0] 0 1 2 3 May 9 00:39:03.902475 kernel: kvm-guest: PV spinlocks enabled May 9 00:39:03.902482 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 9 00:39:03.902491 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:39:03.902507 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 9 00:39:03.902520 kernel: random: crng init done May 9 00:39:03.902536 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 00:39:03.902549 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 00:39:03.902556 kernel: Fallback order for Node 0: 0 May 9 00:39:03.902564 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 May 9 00:39:03.902570 kernel: Policy zone: DMA32 May 9 00:39:03.902587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 00:39:03.902600 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved) May 9 00:39:03.902619 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 00:39:03.902626 kernel: ftrace: allocating 37944 entries in 149 pages May 9 00:39:03.902633 kernel: ftrace: allocated 149 pages with 4 groups May 9 00:39:03.902641 kernel: Dynamic Preempt: voluntary May 9 00:39:03.902666 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 00:39:03.902683 kernel: rcu: RCU event tracing is enabled. May 9 00:39:03.902691 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 00:39:03.902698 kernel: Trampoline variant of Tasks RCU enabled. May 9 00:39:03.902706 kernel: Rude variant of Tasks RCU enabled. May 9 00:39:03.902713 kernel: Tracing variant of Tasks RCU enabled. May 9 00:39:03.902721 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 9 00:39:03.902751 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 00:39:03.902773 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 9 00:39:03.902781 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 9 00:39:03.902789 kernel: Console: colour dummy device 80x25 May 9 00:39:03.902796 kernel: printk: console [ttyS0] enabled May 9 00:39:03.902803 kernel: ACPI: Core revision 20230628 May 9 00:39:03.902814 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 9 00:39:03.902821 kernel: APIC: Switch to symmetric I/O mode setup May 9 00:39:03.902829 kernel: x2apic enabled May 9 00:39:03.902836 kernel: APIC: Switched APIC routing to: physical x2apic May 9 00:39:03.902844 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 9 00:39:03.902851 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 9 00:39:03.902859 kernel: kvm-guest: setup PV IPIs May 9 00:39:03.902870 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 9 00:39:03.902884 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 9 00:39:03.902901 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 9 00:39:03.902917 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 9 00:39:03.902925 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 9 00:39:03.902933 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 9 00:39:03.902940 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 9 00:39:03.902950 kernel: Spectre V2 : Mitigation: Retpolines May 9 00:39:03.902958 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 9 00:39:03.902965 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 9 00:39:03.902975 kernel: RETBleed: Mitigation: untrained return thunk May 9 00:39:03.902983 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 9 00:39:03.902990 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 9 00:39:03.902998 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 9 00:39:03.903006 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 9 00:39:03.903014 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 9 00:39:03.903021 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 9 00:39:03.903029 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 9 00:39:03.903036 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 9 00:39:03.903046 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 9 00:39:03.903054 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 9 00:39:03.903062 kernel: Freeing SMP alternatives memory: 32K May 9 00:39:03.903069 kernel: pid_max: default: 32768 minimum: 301 May 9 00:39:03.903076 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 00:39:03.903084 kernel: landlock: Up and running. May 9 00:39:03.903091 kernel: SELinux: Initializing. May 9 00:39:03.903099 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:39:03.903106 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:39:03.903117 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 9 00:39:03.903124 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:39:03.903132 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:39:03.903139 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:39:03.903147 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 9 00:39:03.903154 kernel: ... version: 0 May 9 00:39:03.903161 kernel: ... bit width: 48 May 9 00:39:03.903169 kernel: ... generic registers: 6 May 9 00:39:03.903176 kernel: ... value mask: 0000ffffffffffff May 9 00:39:03.903186 kernel: ... max period: 00007fffffffffff May 9 00:39:03.903193 kernel: ... fixed-purpose events: 0 May 9 00:39:03.903201 kernel: ... event mask: 000000000000003f May 9 00:39:03.903208 kernel: signal: max sigframe size: 1776 May 9 00:39:03.903216 kernel: rcu: Hierarchical SRCU implementation. May 9 00:39:03.903226 kernel: rcu: Max phase no-delay instances is 400. 
May 9 00:39:03.903235 kernel: smp: Bringing up secondary CPUs ... May 9 00:39:03.903244 kernel: smpboot: x86: Booting SMP configuration: May 9 00:39:03.903254 kernel: .... node #0, CPUs: #1 #2 #3 May 9 00:39:03.903266 kernel: smp: Brought up 1 node, 4 CPUs May 9 00:39:03.903276 kernel: smpboot: Max logical packages: 1 May 9 00:39:03.903286 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 9 00:39:03.903296 kernel: devtmpfs: initialized May 9 00:39:03.903306 kernel: x86/mm: Memory block size: 128MB May 9 00:39:03.903316 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 9 00:39:03.903326 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 9 00:39:03.903336 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 9 00:39:03.903346 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 9 00:39:03.903360 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 9 00:39:03.903380 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 00:39:03.903390 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 00:39:03.903400 kernel: pinctrl core: initialized pinctrl subsystem May 9 00:39:03.903410 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 00:39:03.903421 kernel: audit: initializing netlink subsys (disabled) May 9 00:39:03.903431 kernel: audit: type=2000 audit(1746751142.879:1): state=initialized audit_enabled=0 res=1 May 9 00:39:03.903441 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 00:39:03.903457 kernel: thermal_sys: Registered thermal governor 'user_space' May 9 00:39:03.903472 kernel: cpuidle: using governor menu May 9 00:39:03.903482 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 00:39:03.903492 kernel: dca service started, version 1.12.1 May 9 00:39:03.903499 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 9 00:39:03.903507 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 9 00:39:03.903514 kernel: PCI: Using configuration type 1 for base access May 9 00:39:03.903522 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 9 00:39:03.903539 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 00:39:03.903551 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 9 00:39:03.903558 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 00:39:03.903566 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 9 00:39:03.903573 kernel: ACPI: Added _OSI(Module Device) May 9 00:39:03.903581 kernel: ACPI: Added _OSI(Processor Device) May 9 00:39:03.903598 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 00:39:03.903605 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 00:39:03.903621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 00:39:03.903635 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 9 00:39:03.903658 kernel: ACPI: Interpreter enabled May 9 00:39:03.903676 kernel: ACPI: PM: (supports S0 S3 S5) May 9 00:39:03.903685 kernel: ACPI: Using IOAPIC for interrupt routing May 9 00:39:03.903700 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 9 00:39:03.903722 kernel: PCI: Using E820 reservations for host bridge windows May 9 00:39:03.903742 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 9 00:39:03.903758 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 00:39:03.903984 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 9 00:39:03.904120 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 9 00:39:03.904288 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 9 00:39:03.904300 kernel: PCI host bridge to bus 0000:00 May 9 00:39:03.904473 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 9 00:39:03.904600 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 9 00:39:03.904762 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 9 00:39:03.904878 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 9 00:39:03.905022 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 9 00:39:03.905175 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 9 00:39:03.905291 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 00:39:03.905453 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 9 00:39:03.905617 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 9 00:39:03.905893 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 9 00:39:03.906072 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 9 00:39:03.906245 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 9 00:39:03.906402 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 9 00:39:03.906557 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 9 00:39:03.906726 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 9 00:39:03.906881 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 9 00:39:03.907024 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 9 00:39:03.907228 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 9 00:39:03.907413 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 9 00:39:03.907546 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 9 00:39:03.907687 kernel: pci 0000:00:03.0: reg 0x14: 
[mem 0xc1042000-0xc1042fff] May 9 00:39:03.908082 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 9 00:39:03.908268 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 9 00:39:03.908431 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 9 00:39:03.908569 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 9 00:39:03.908702 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 9 00:39:03.908849 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 9 00:39:03.909069 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 9 00:39:03.909214 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 9 00:39:03.909384 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 9 00:39:03.909515 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 9 00:39:03.909700 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 9 00:39:03.909861 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 9 00:39:03.909998 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 9 00:39:03.910011 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 9 00:39:03.910019 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 9 00:39:03.910027 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 9 00:39:03.910035 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 9 00:39:03.910043 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 9 00:39:03.910056 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 9 00:39:03.910064 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 9 00:39:03.910072 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 9 00:39:03.910080 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 9 00:39:03.910088 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 9 00:39:03.910098 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 9 00:39:03.910106 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 9 00:39:03.910114 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 9 00:39:03.910125 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 9 00:39:03.910144 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 9 00:39:03.910160 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 9 00:39:03.910178 kernel: iommu: Default domain type: Translated May 9 00:39:03.910192 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 9 00:39:03.910210 kernel: efivars: Registered efivars operations May 9 00:39:03.910226 kernel: PCI: Using ACPI for IRQ routing May 9 00:39:03.910239 kernel: PCI: pci_cache_line_size set to 64 bytes May 9 00:39:03.910248 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 9 00:39:03.910255 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 9 00:39:03.910266 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 9 00:39:03.910273 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 9 00:39:03.910413 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 9 00:39:03.910577 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 9 00:39:03.910701 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 9 00:39:03.911903 kernel: vgaarb: loaded May 9 00:39:03.911917 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 
8, 0 May 9 00:39:03.911925 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 9 00:39:03.911937 kernel: clocksource: Switched to clocksource kvm-clock May 9 00:39:03.911945 kernel: VFS: Disk quotas dquot_6.6.0 May 9 00:39:03.911954 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 00:39:03.911961 kernel: pnp: PnP ACPI init May 9 00:39:03.912124 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 9 00:39:03.912141 kernel: pnp: PnP ACPI: found 6 devices May 9 00:39:03.912159 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 9 00:39:03.912168 kernel: NET: Registered PF_INET protocol family May 9 00:39:03.912181 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 00:39:03.912202 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 00:39:03.912213 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 00:39:03.912231 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 00:39:03.912240 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 00:39:03.912247 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 00:39:03.912266 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:39:03.912277 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:39:03.912289 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 00:39:03.912300 kernel: NET: Registered PF_XDP protocol family May 9 00:39:03.912482 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 9 00:39:03.912619 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 9 00:39:03.912832 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 9 00:39:03.912992 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 9 00:39:03.913153 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 9 00:39:03.913294 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 9 00:39:03.913437 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 9 00:39:03.913562 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 9 00:39:03.913574 kernel: PCI: CLS 0 bytes, default 64 May 9 00:39:03.913582 kernel: Initialise system trusted keyrings May 9 00:39:03.913590 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 9 00:39:03.913598 kernel: Key type asymmetric registered May 9 00:39:03.913605 kernel: Asymmetric key parser 'x509' registered May 9 00:39:03.913613 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 9 00:39:03.913620 kernel: io scheduler mq-deadline registered May 9 00:39:03.913629 kernel: io scheduler kyber registered May 9 00:39:03.913640 kernel: io scheduler bfq registered May 9 00:39:03.913648 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 9 00:39:03.913656 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 9 00:39:03.913664 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 9 00:39:03.913672 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 9 00:39:03.913679 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 00:39:03.913688 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 9 00:39:03.913696 kernel: i8042: PNP: PS/2 
Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 9 00:39:03.913703 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 9 00:39:03.913714 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 9 00:39:03.913914 kernel: rtc_cmos 00:04: RTC can wake from S4 May 9 00:39:03.913939 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 9 00:39:03.914095 kernel: rtc_cmos 00:04: registered as rtc0 May 9 00:39:03.914267 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:39:03 UTC (1746751143) May 9 00:39:03.914428 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 9 00:39:03.914439 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 9 00:39:03.914447 kernel: efifb: probing for efifb May 9 00:39:03.914461 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k May 9 00:39:03.914469 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 May 9 00:39:03.914476 kernel: efifb: scrolling: redraw May 9 00:39:03.914484 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 May 9 00:39:03.914492 kernel: Console: switching to colour frame buffer device 100x37 May 9 00:39:03.914500 kernel: fb0: EFI VGA frame buffer device May 9 00:39:03.914529 kernel: pstore: Using crash dump compression: deflate May 9 00:39:03.914539 kernel: pstore: Registered efi_pstore as persistent store backend May 9 00:39:03.914547 kernel: NET: Registered PF_INET6 protocol family May 9 00:39:03.914557 kernel: Segment Routing with IPv6 May 9 00:39:03.914565 kernel: In-situ OAM (IOAM) with IPv6 May 9 00:39:03.914577 kernel: NET: Registered PF_PACKET protocol family May 9 00:39:03.914589 kernel: Key type dns_resolver registered May 9 00:39:03.914604 kernel: IPI shorthand broadcast: enabled May 9 00:39:03.914622 kernel: sched_clock: Marking stable (675002897, 116777675)->(808210451, -16429879) May 9 00:39:03.914636 kernel: registered taskstats version 1 May 9 00:39:03.914652 kernel: Loading compiled-in X.509 certificates May 9 00:39:03.914669 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: fe5c896a3ca06bb89ebdfb7ed85f611806e4c1cc' May 9 00:39:03.914679 kernel: Key type .fscrypt registered May 9 00:39:03.914688 kernel: Key type fscrypt-provisioning registered May 9 00:39:03.914704 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 9 00:39:03.914718 kernel: ima: Allocated hash algorithm: sha1 May 9 00:39:03.914741 kernel: ima: No architecture policies found May 9 00:39:03.914749 kernel: clk: Disabling unused clocks May 9 00:39:03.914757 kernel: Freeing unused kernel image (initmem) memory: 42864K May 9 00:39:03.914765 kernel: Write protecting the kernel read-only data: 36864k May 9 00:39:03.914774 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 9 00:39:03.914785 kernel: Run /init as init process May 9 00:39:03.914792 kernel: with arguments: May 9 00:39:03.914800 kernel: /init May 9 00:39:03.914808 kernel: with environment: May 9 00:39:03.914816 kernel: HOME=/ May 9 00:39:03.914823 kernel: TERM=linux May 9 00:39:03.914831 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 00:39:03.914842 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:39:03.914856 systemd[1]: Detected virtualization kvm. May 9 00:39:03.914873 systemd[1]: Detected architecture x86-64. May 9 00:39:03.914887 systemd[1]: Running in initrd. May 9 00:39:03.914904 systemd[1]: No hostname configured, using default hostname. May 9 00:39:03.914921 systemd[1]: Hostname set to . May 9 00:39:03.914938 systemd[1]: Initializing machine ID from VM UUID. May 9 00:39:03.914950 systemd[1]: Queued start job for default target initrd.target. May 9 00:39:03.914965 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:39:03.914980 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:39:03.914989 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 9 00:39:03.915003 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:39:03.915018 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 9 00:39:03.915032 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 9 00:39:03.915046 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 9 00:39:03.915055 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 9 00:39:03.915063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:39:03.915072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:39:03.915080 systemd[1]: Reached target paths.target - Path Units. May 9 00:39:03.915089 systemd[1]: Reached target slices.target - Slice Units. May 9 00:39:03.915105 systemd[1]: Reached target swap.target - Swaps. May 9 00:39:03.915122 systemd[1]: Reached target timers.target - Timer Units. May 9 00:39:03.915140 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:39:03.915157 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:39:03.915168 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 00:39:03.915177 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
May 9 00:39:03.915185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:39:03.915194 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:39:03.915203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:39:03.915214 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:39:03.915222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 9 00:39:03.915231 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:39:03.915239 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 9 00:39:03.915248 systemd[1]: Starting systemd-fsck-usr.service... May 9 00:39:03.915256 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:39:03.915265 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:39:03.915273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:03.915281 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 9 00:39:03.915298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:39:03.915306 systemd[1]: Finished systemd-fsck-usr.service. May 9 00:39:03.915345 systemd-journald[192]: Collecting audit messages is disabled. May 9 00:39:03.915378 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:39:03.915388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:03.915396 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:39:03.915406 systemd-journald[192]: Journal started May 9 00:39:03.915427 systemd-journald[192]: Runtime Journal (/run/log/journal/00caab5558b141b2a622049a8f121d05) is 6.0M, max 48.3M, 42.2M free. May 9 00:39:03.906325 systemd-modules-load[193]: Inserted module 'overlay' May 9 00:39:03.919793 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:39:03.919158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:39:03.920869 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:39:03.922955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:39:03.937765 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 9 00:39:03.939745 kernel: Bridge firewalling registered May 9 00:39:03.939718 systemd-modules-load[193]: Inserted module 'br_netfilter' May 9 00:39:03.941304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:39:03.943853 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:39:03.950569 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 9 00:39:03.953745 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:39:03.955115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:39:03.958006 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:39:03.975495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 9 00:39:03.977948 dracut-cmdline[221]: dracut-dracut-053 May 9 00:39:03.979926 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:39:03.986597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:39:04.017359 systemd-resolved[237]: Positive Trust Anchors: May 9 00:39:04.017386 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:39:04.017419 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:39:04.020087 systemd-resolved[237]: Defaulting to hostname 'linux'. May 9 00:39:04.021211 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:39:04.027876 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:39:04.083781 kernel: SCSI subsystem initialized May 9 00:39:04.092749 kernel: Loading iSCSI transport class v2.0-870. May 9 00:39:04.116761 kernel: iscsi: registered transport (tcp) May 9 00:39:04.138756 kernel: iscsi: registered transport (qla4xxx) May 9 00:39:04.138784 kernel: QLogic iSCSI HBA Driver May 9 00:39:04.191675 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 9 00:39:04.205050 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 9 00:39:04.229167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 9 00:39:04.229243 kernel: device-mapper: uevent: version 1.0.3 May 9 00:39:04.230233 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 00:39:04.272783 kernel: raid6: avx2x4 gen() 29808 MB/s May 9 00:39:04.289762 kernel: raid6: avx2x2 gen() 30747 MB/s May 9 00:39:04.306935 kernel: raid6: avx2x1 gen() 25614 MB/s May 9 00:39:04.306994 kernel: raid6: using algorithm avx2x2 gen() 30747 MB/s May 9 00:39:04.325145 kernel: raid6: .... xor() 19216 MB/s, rmw enabled May 9 00:39:04.325231 kernel: raid6: using avx2x2 recovery algorithm May 9 00:39:04.345766 kernel: xor: automatically using best checksumming function avx May 9 00:39:04.501769 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 00:39:04.515570 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 00:39:04.522012 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:39:04.535868 systemd-udevd[412]: Using default interface naming scheme 'v255'. May 9 00:39:04.540522 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:39:04.555950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 9 00:39:04.570804 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation May 9 00:39:04.605409 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:39:04.618936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:39:04.687243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:39:04.694902 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 00:39:04.721972 kernel: cryptd: max_cpu_qlen set to 1000 May 9 00:39:04.718333 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 00:39:04.720161 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:39:04.724761 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:39:04.729519 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 9 00:39:04.731545 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 9 00:39:04.728328 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:39:04.739792 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 00:39:04.752880 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 00:39:04.752927 kernel: GPT:9289727 != 19775487 May 9 00:39:04.752942 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 00:39:04.752956 kernel: GPT:9289727 != 19775487 May 9 00:39:04.752969 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 00:39:04.752984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:39:04.759087 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:39:04.759243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:39:04.769979 kernel: AVX2 version of gcm_enc/dec engaged. May 9 00:39:04.762035 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:39:04.766461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:39:04.774086 kernel: libata version 3.00 loaded. May 9 00:39:04.766775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:04.780400 kernel: AES CTR mode by8 optimization enabled May 9 00:39:04.768546 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:04.774980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:04.776607 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 00:39:04.789971 kernel: ahci 0000:00:1f.2: version 3.0 May 9 00:39:04.791463 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 9 00:39:04.794760 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 9 00:39:04.795030 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 9 00:39:04.798788 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466) May 9 00:39:04.802981 kernel: BTRFS: device fsid 8d57db23-a0fc-4362-9769-38fbda5747c1 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (457) May 9 00:39:04.804783 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
May 9 00:39:04.807954 kernel: scsi host0: ahci May 9 00:39:04.808186 kernel: scsi host1: ahci May 9 00:39:04.809040 kernel: scsi host2: ahci May 9 00:39:04.810748 kernel: scsi host3: ahci May 9 00:39:04.811752 kernel: scsi host4: ahci May 9 00:39:04.813829 kernel: scsi host5: ahci May 9 00:39:04.814066 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 9 00:39:04.814079 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 9 00:39:04.814756 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 9 00:39:04.816694 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 9 00:39:04.816742 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 9 00:39:04.818749 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 9 00:39:04.819841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 9 00:39:04.835027 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 9 00:39:04.838489 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 9 00:39:04.846062 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:39:04.860972 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 00:39:04.863505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:39:04.863574 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:04.867036 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:04.869949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:04.885612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:04.888743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 00:39:04.920260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:39:05.036899 disk-uuid[556]: Primary Header is updated. May 9 00:39:05.036899 disk-uuid[556]: Secondary Entries is updated. May 9 00:39:05.036899 disk-uuid[556]: Secondary Header is updated. 
May 9 00:39:05.041750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:39:05.045745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:39:05.128761 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 9 00:39:05.128825 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 9 00:39:05.129795 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 9 00:39:05.130980 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 9 00:39:05.131761 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 9 00:39:05.131786 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 9 00:39:05.132899 kernel: ata3.00: applying bridge limits May 9 00:39:05.133757 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 9 00:39:05.145776 kernel: ata3.00: configured for UDMA/100 May 9 00:39:05.145802 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 9 00:39:05.200761 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 9 00:39:05.200968 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 9 00:39:05.222757 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 9 00:39:06.047767 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 9 00:39:06.048077 disk-uuid[571]: The operation has completed successfully. May 9 00:39:06.076944 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 00:39:06.077086 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 00:39:06.104929 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 00:39:06.108185 sh[595]: Success May 9 00:39:06.120792 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 9 00:39:06.154954 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 00:39:06.167255 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 00:39:06.170641 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 9 00:39:06.182779 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57db23-a0fc-4362-9769-38fbda5747c1 May 9 00:39:06.182830 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 9 00:39:06.182841 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 00:39:06.183803 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 00:39:06.185166 kernel: BTRFS info (device dm-0): using free space tree May 9 00:39:06.189402 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 00:39:06.191810 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 00:39:06.200870 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 00:39:06.202444 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 00:39:06.211759 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:06.211787 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:39:06.213382 kernel: BTRFS info (device vda6): using free space tree May 9 00:39:06.215784 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:39:06.225027 systemd[1]: mnt-oem.mount: Deactivated successfully. 
May 9 00:39:06.226696 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:06.235953 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 00:39:06.241862 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 00:39:06.296263 ignition[687]: Ignition 2.19.0 May 9 00:39:06.296276 ignition[687]: Stage: fetch-offline May 9 00:39:06.296325 ignition[687]: no configs at "/usr/lib/ignition/base.d" May 9 00:39:06.296337 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:06.296445 ignition[687]: parsed url from cmdline: "" May 9 00:39:06.296450 ignition[687]: no config URL provided May 9 00:39:06.296456 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" May 9 00:39:06.296466 ignition[687]: no config at "/usr/lib/ignition/user.ign" May 9 00:39:06.296502 ignition[687]: op(1): [started] loading QEMU firmware config module May 9 00:39:06.296508 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" May 9 00:39:06.304863 ignition[687]: op(1): [finished] loading QEMU firmware config module May 9 00:39:06.321675 ignition[687]: parsing config with SHA512: 0e93fb272ad4db24463a88667b4e05822de5c96f61aa7771cc8ab158c62ff73e940b7257e4519afeb0aae8717d395f3bf69ddf5d7071fb9e52e53da2a060aa0d May 9 00:39:06.326120 unknown[687]: fetched base config from "system" May 9 00:39:06.326128 unknown[687]: fetched user config from "qemu" May 9 00:39:06.326986 ignition[687]: fetch-offline: fetch-offline passed May 9 00:39:06.326795 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:39:06.327092 ignition[687]: Ignition finished successfully May 9 00:39:06.329066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:39:06.341891 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:39:06.364676 systemd-networkd[784]: lo: Link UP May 9 00:39:06.364688 systemd-networkd[784]: lo: Gained carrier May 9 00:39:06.366241 systemd-networkd[784]: Enumeration completed May 9 00:39:06.366344 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:39:06.366642 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:39:06.366646 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:39:06.367724 systemd-networkd[784]: eth0: Link UP May 9 00:39:06.367741 systemd-networkd[784]: eth0: Gained carrier May 9 00:39:06.367749 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:39:06.368116 systemd[1]: Reached target network.target - Network. May 9 00:39:06.369748 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 9 00:39:06.381888 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 9 00:39:06.392801 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:39:06.397550 ignition[786]: Ignition 2.19.0 May 9 00:39:06.397561 ignition[786]: Stage: kargs May 9 00:39:06.397713 ignition[786]: no configs at "/usr/lib/ignition/base.d" May 9 00:39:06.397724 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:06.401478 ignition[786]: kargs: kargs passed May 9 00:39:06.401523 ignition[786]: Ignition finished successfully May 9 00:39:06.405765 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 00:39:06.417899 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 00:39:06.429674 ignition[794]: Ignition 2.19.0 May 9 00:39:06.429685 ignition[794]: Stage: disks May 9 00:39:06.429867 ignition[794]: no configs at "/usr/lib/ignition/base.d" May 9 00:39:06.429881 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:06.430706 ignition[794]: disks: disks passed May 9 00:39:06.430761 ignition[794]: Ignition finished successfully May 9 00:39:06.436896 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 00:39:06.439067 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 00:39:06.439141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:39:06.442635 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:39:06.444778 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:39:06.446715 systemd[1]: Reached target basic.target - Basic System. May 9 00:39:06.461872 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 00:39:06.475546 systemd-resolved[237]: Detected conflict on linux IN A 10.0.0.133 May 9 00:39:06.475562 systemd-resolved[237]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. May 9 00:39:06.477610 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 00:39:06.484882 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 00:39:06.499864 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 00:39:06.590757 kernel: EXT4-fs (vda9): mounted filesystem 4cb03022-f5a4-4664-b5b4-bc39fcc2f946 r/w with ordered data mode. Quota mode: none. May 9 00:39:06.591191 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 00:39:06.591883 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 00:39:06.605831 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:39:06.607623 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 00:39:06.609357 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 00:39:06.618661 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) May 9 00:39:06.618687 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:06.618699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:39:06.618710 kernel: BTRFS info (device vda6): using free space tree May 9 00:39:06.609393 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
May 9 00:39:06.609414 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 00:39:06.616391 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 00:39:06.620366 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 00:39:06.628752 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:39:06.631186 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:39:06.663128 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory May 9 00:39:06.669223 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory May 9 00:39:06.673818 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory May 9 00:39:06.679293 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory May 9 00:39:06.765334 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 00:39:06.772843 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 00:39:06.774546 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 00:39:06.781757 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:06.800708 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 00:39:06.803046 ignition[930]: INFO : Ignition 2.19.0 May 9 00:39:06.803046 ignition[930]: INFO : Stage: mount May 9 00:39:06.803046 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:39:06.803046 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:06.803046 ignition[930]: INFO : mount: mount passed May 9 00:39:06.803046 ignition[930]: INFO : Ignition finished successfully May 9 00:39:06.804975 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 00:39:06.813842 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 00:39:07.182277 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 00:39:07.195049 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 00:39:07.202143 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) May 9 00:39:07.202176 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:39:07.202191 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:39:07.203766 kernel: BTRFS info (device vda6): using free space tree May 9 00:39:07.206768 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:39:07.207799 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
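[Editor's note] The OEM partition (/dev/vda6, BTRFS label "OEM") is mounted here and again later, each time logging the usual first-mount messages. For reference, inspecting and mounting it by hand from a rescue shell would look roughly like this (illustrative only; device names match this log):

    btrfs filesystem show /dev/vda6        # confirm the filesystem UUID seen in the log
    mkdir -p /sysroot/oem
    mount -t btrfs LABEL=OEM /sysroot/oem  # same result as the sysroot-oem.mount unit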
May 9 00:39:07.234677 ignition[959]: INFO : Ignition 2.19.0 May 9 00:39:07.234677 ignition[959]: INFO : Stage: files May 9 00:39:07.236623 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:39:07.236623 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:07.239241 ignition[959]: DEBUG : files: compiled without relabeling support, skipping May 9 00:39:07.240537 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:39:07.240537 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:39:07.244760 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:39:07.246334 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:39:07.248138 unknown[959]: wrote ssh authorized keys file for user: core May 9 00:39:07.249457 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:39:07.251690 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 9 00:39:07.253600 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 9 00:39:07.255534 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 00:39:07.257683 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 9 00:39:07.337208 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 9 00:39:07.530006 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 00:39:07.530006 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 00:39:07.533979 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:39:07.533979 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 9 00:39:08.008318 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 9 00:39:08.259991 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 9 00:39:08.259991 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 9 00:39:08.264080 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 9 00:39:08.264080 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 9 00:39:08.264080 ignition[959]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" May 9 00:39:08.290946 ignition[959]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:39:08.296268 ignition[959]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 9 00:39:08.297940 ignition[959]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" May 9 00:39:08.297940 ignition[959]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" May 9 00:39:08.297940 ignition[959]: INFO : files: op(14): [finished] setting preset to enabled for 
"prepare-helm.service" May 9 00:39:08.297940 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 00:39:08.297940 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 00:39:08.297940 ignition[959]: INFO : files: files passed May 9 00:39:08.297940 ignition[959]: INFO : Ignition finished successfully May 9 00:39:08.299430 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 00:39:08.311884 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 00:39:08.313882 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 00:39:08.315579 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 00:39:08.315686 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 00:39:08.324080 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory May 9 00:39:08.326831 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:39:08.328552 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 00:39:08.331332 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 00:39:08.329795 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:39:08.331512 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 00:39:08.340907 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 00:39:08.365841 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 00:39:08.365978 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 00:39:08.367148 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 00:39:08.369325 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 00:39:08.369935 systemd-networkd[784]: eth0: Gained IPv6LL May 9 00:39:08.372644 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 00:39:08.379886 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 00:39:08.393294 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:39:08.400945 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 00:39:08.410415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 00:39:08.411789 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:39:08.414074 systemd[1]: Stopped target timers.target - Timer Units. May 9 00:39:08.416154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 00:39:08.416279 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 00:39:08.418685 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 00:39:08.420302 systemd[1]: Stopped target basic.target - Basic System. May 9 00:39:08.422354 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 00:39:08.424434 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
May 9 00:39:08.426482 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 00:39:08.428689 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 00:39:08.430869 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 00:39:08.433171 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 00:39:08.435275 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 00:39:08.437462 systemd[1]: Stopped target swap.target - Swaps. May 9 00:39:08.439276 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 00:39:08.439388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 00:39:08.441782 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 00:39:08.443251 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:39:08.445398 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 00:39:08.445550 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:39:08.447662 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 00:39:08.447806 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 00:39:08.450232 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 9 00:39:08.450356 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 9 00:39:08.452222 systemd[1]: Stopped target paths.target - Path Units. May 9 00:39:08.454006 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 9 00:39:08.454125 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:39:08.456633 systemd[1]: Stopped target slices.target - Slice Units. May 9 00:39:08.458505 systemd[1]: Stopped target sockets.target - Socket Units. May 9 00:39:08.460488 systemd[1]: iscsid.socket: Deactivated successfully. May 9 00:39:08.460581 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 9 00:39:08.462524 systemd[1]: iscsiuio.socket: Deactivated successfully. May 9 00:39:08.462611 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 9 00:39:08.464674 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 00:39:08.464795 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 00:39:08.466754 systemd[1]: ignition-files.service: Deactivated successfully. May 9 00:39:08.466853 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 00:39:08.481917 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 00:39:08.483749 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 00:39:08.484777 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 00:39:08.484930 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:39:08.487142 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 00:39:08.487427 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 00:39:08.495363 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 00:39:08.495548 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 9 00:39:08.499010 ignition[1015]: INFO : Ignition 2.19.0 May 9 00:39:08.499010 ignition[1015]: INFO : Stage: umount May 9 00:39:08.499010 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:39:08.499010 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:39:08.499010 ignition[1015]: INFO : umount: umount passed May 9 00:39:08.499010 ignition[1015]: INFO : Ignition finished successfully May 9 00:39:08.500163 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 00:39:08.500293 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 00:39:08.501986 systemd[1]: Stopped target network.target - Network. May 9 00:39:08.503686 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 00:39:08.503772 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 00:39:08.505943 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 00:39:08.505991 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 00:39:08.507849 systemd[1]: ignition-setup.service: Deactivated successfully. May 9 00:39:08.507894 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 9 00:39:08.509714 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 9 00:39:08.509782 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 9 00:39:08.512208 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 9 00:39:08.514244 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 9 00:39:08.517272 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 00:39:08.517765 systemd-networkd[784]: eth0: DHCPv6 lease lost May 9 00:39:08.520110 systemd[1]: systemd-networkd.service: Deactivated successfully. May 9 00:39:08.520280 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 9 00:39:08.522798 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 9 00:39:08.522873 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 9 00:39:08.531907 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 9 00:39:08.533077 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 9 00:39:08.533159 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 9 00:39:08.535705 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:39:08.538504 systemd[1]: systemd-resolved.service: Deactivated successfully. May 9 00:39:08.538668 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 9 00:39:08.551187 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:39:08.551308 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:39:08.554688 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 9 00:39:08.555815 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 9 00:39:08.558204 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 9 00:39:08.559285 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:39:08.562334 systemd[1]: systemd-udevd.service: Deactivated successfully. May 9 00:39:08.563446 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:39:08.566476 systemd[1]: network-cleanup.service: Deactivated successfully. 
May 9 00:39:08.567594 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 9 00:39:08.570789 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 9 00:39:08.572066 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 9 00:39:08.574346 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 9 00:39:08.574403 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:39:08.577739 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 9 00:39:08.578707 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 9 00:39:08.580991 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 9 00:39:08.582014 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 9 00:39:08.584278 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 00:39:08.585293 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 00:39:08.597990 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 9 00:39:08.600333 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 9 00:39:08.600410 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 9 00:39:08.603967 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 9 00:39:08.605126 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:39:08.607924 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 9 00:39:08.607983 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:39:08.611448 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:39:08.612477 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:08.615439 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 9 00:39:08.616676 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 9 00:39:08.704552 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 00:39:08.705824 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 00:39:08.708339 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 9 00:39:08.710740 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 9 00:39:08.711726 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 9 00:39:08.725868 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 9 00:39:08.735022 systemd[1]: Switching root. May 9 00:39:08.770371 systemd-journald[192]: Journal stopped May 9 00:39:10.032643 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
May 9 00:39:10.033288 kernel: SELinux: policy capability network_peer_controls=1 May 9 00:39:10.033354 kernel: SELinux: policy capability open_perms=1 May 9 00:39:10.033986 kernel: SELinux: policy capability extended_socket_class=1 May 9 00:39:10.034004 kernel: SELinux: policy capability always_check_network=0 May 9 00:39:10.034020 kernel: SELinux: policy capability cgroup_seclabel=1 May 9 00:39:10.034037 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 9 00:39:10.034052 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 9 00:39:10.034075 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 9 00:39:10.034096 kernel: audit: type=1403 audit(1746751149.215:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 9 00:39:10.034117 systemd[1]: Successfully loaded SELinux policy in 45.405ms. May 9 00:39:10.034141 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.315ms. May 9 00:39:10.034159 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:39:10.034176 systemd[1]: Detected virtualization kvm. May 9 00:39:10.034199 systemd[1]: Detected architecture x86-64. May 9 00:39:10.034215 systemd[1]: Detected first boot. May 9 00:39:10.034278 systemd[1]: Initializing machine ID from VM UUID. May 9 00:39:10.034297 zram_generator::config[1076]: No configuration found. May 9 00:39:10.034319 systemd[1]: Populated /etc with preset unit settings. May 9 00:39:10.034335 systemd[1]: Queued start job for default target multi-user.target. May 9 00:39:10.034351 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 9 00:39:10.034369 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 9 00:39:10.034386 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 9 00:39:10.034402 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 9 00:39:10.034418 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 9 00:39:10.034435 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 9 00:39:10.034455 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 9 00:39:10.034472 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 9 00:39:10.034489 systemd[1]: Created slice user.slice - User and Session Slice. May 9 00:39:10.034506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:39:10.034523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:39:10.034545 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 9 00:39:10.034561 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 9 00:39:10.034578 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 9 00:39:10.034595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 9 00:39:10.034617 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
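[Editor's note] "zram_generator::config: No configuration found" means the zram generator ran during the systemd (re)load but had nothing to set up. A hedged example of the file it looks for — key names follow zram-generator's documented format, the values are arbitrary:

    # /etc/systemd/zram-generator.conf -- absent on this host, shown only as an example
    [zram0]
    zram-size = min(ram / 2, 4096)   # size expression evaluated by zram-generator (MiB)
    compression-algorithm = zstd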
May 9 00:39:10.034634 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 00:39:10.034650 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 9 00:39:10.034666 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 00:39:10.034682 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 00:39:10.034699 systemd[1]: Reached target slices.target - Slice Units. May 9 00:39:10.034715 systemd[1]: Reached target swap.target - Swaps. May 9 00:39:10.034744 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 9 00:39:10.034767 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 9 00:39:10.034783 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 9 00:39:10.034810 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 9 00:39:10.034840 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 9 00:39:10.034871 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 9 00:39:10.034902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 9 00:39:10.034927 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 9 00:39:10.034943 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 9 00:39:10.034959 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 9 00:39:10.034975 systemd[1]: Mounting media.mount - External Media Directory... May 9 00:39:10.034996 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:10.035013 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 9 00:39:10.035028 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 9 00:39:10.035048 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 9 00:39:10.035064 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 9 00:39:10.035080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:39:10.035097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 9 00:39:10.035113 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 9 00:39:10.035132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:39:10.035149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:39:10.035166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:39:10.035182 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 9 00:39:10.035198 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:39:10.035214 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 9 00:39:10.035242 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 9 00:39:10.035259 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
May 9 00:39:10.035279 kernel: fuse: init (API version 7.39) May 9 00:39:10.035295 systemd[1]: Starting systemd-journald.service - Journal Service... May 9 00:39:10.035311 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 9 00:39:10.035327 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 9 00:39:10.035344 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 9 00:39:10.035360 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 00:39:10.035400 systemd-journald[1165]: Collecting audit messages is disabled. May 9 00:39:10.035429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:10.035452 kernel: ACPI: bus type drm_connector registered May 9 00:39:10.035468 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 9 00:39:10.035484 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 9 00:39:10.035499 systemd-journald[1165]: Journal started May 9 00:39:10.035529 systemd-journald[1165]: Runtime Journal (/run/log/journal/00caab5558b141b2a622049a8f121d05) is 6.0M, max 48.3M, 42.2M free. May 9 00:39:10.040352 systemd[1]: Started systemd-journald.service - Journal Service. May 9 00:39:10.040393 kernel: loop: module loaded May 9 00:39:10.042292 systemd[1]: Mounted media.mount - External Media Directory. May 9 00:39:10.043954 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 9 00:39:10.045435 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 9 00:39:10.046980 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 9 00:39:10.048535 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 9 00:39:10.050340 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 9 00:39:10.051919 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 9 00:39:10.052154 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 9 00:39:10.054117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:39:10.054348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:39:10.055826 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:39:10.056048 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:39:10.057426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:39:10.057646 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:39:10.059160 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 9 00:39:10.059392 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 9 00:39:10.060799 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:39:10.061048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:39:10.062510 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 9 00:39:10.064016 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 9 00:39:10.065620 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 9 00:39:10.079637 systemd[1]: Reached target network-pre.target - Preparation for Network. 
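[Editor's note] journald reports a 6.0M runtime journal with a 48.3M cap; those limits are derived from filesystem size unless configured. A journald.conf fragment that would pin them explicitly (illustrative; the defaults were in effect in this log):

    # /etc/systemd/journald.conf (or a drop-in under journald.conf.d/)
    [Journal]
    Storage=persistent     # flush from /run/log/journal to /var/log/journal once / is writable
    RuntimeMaxUse=48M
    SystemMaxUse=195M      # in the same ballpark as the persistent-journal cap reported below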
May 9 00:39:10.091849 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 9 00:39:10.094416 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 9 00:39:10.095578 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 9 00:39:10.100447 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 9 00:39:10.105003 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 9 00:39:10.106215 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:39:10.110862 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 9 00:39:10.112041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:39:10.113552 systemd-journald[1165]: Time spent on flushing to /var/log/journal/00caab5558b141b2a622049a8f121d05 is 22.409ms for 984 entries. May 9 00:39:10.113552 systemd-journald[1165]: System Journal (/var/log/journal/00caab5558b141b2a622049a8f121d05) is 8.0M, max 195.6M, 187.6M free. May 9 00:39:10.143599 systemd-journald[1165]: Received client request to flush runtime journal. May 9 00:39:10.113280 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:39:10.119175 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 9 00:39:10.122097 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 9 00:39:10.124205 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 9 00:39:10.128781 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 00:39:10.143024 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 9 00:39:10.145283 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 9 00:39:10.147340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 9 00:39:10.151539 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 9 00:39:10.158273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:39:10.161423 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 9 00:39:10.163074 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. May 9 00:39:10.163091 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. May 9 00:39:10.169568 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 9 00:39:10.181897 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 9 00:39:10.205464 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 9 00:39:10.216974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 9 00:39:10.232711 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. May 9 00:39:10.232874 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. May 9 00:39:10.238467 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
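[Editor's note] The repeated "ACLs are not supported, ignoring" warnings likely come from ACL entries in the shipped tmpfiles.d snippets being skipped on a filesystem without ACL support. For reference, the tmpfiles.d(5) line format such snippets use — the paths and the ACL spec here are illustrative, not quoted from this host:

    # Type  Path               Mode  User  Group            Age  Argument
    d       /var/log/journal   2755  root  systemd-journal  -
    a+      /var/log/journal   -     -     -                -    d:group:adm:r-x   # ACL line; ignored when the fs lacks ACLs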
May 9 00:39:10.642839 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 9 00:39:10.655876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 00:39:10.681596 systemd-udevd[1242]: Using default interface naming scheme 'v255'. May 9 00:39:10.697506 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 00:39:10.708938 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 00:39:10.715170 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 9 00:39:10.736245 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. May 9 00:39:10.758906 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1260) May 9 00:39:10.782595 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 9 00:39:10.782757 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 9 00:39:10.793755 kernel: ACPI: button: Power Button [PWRF] May 9 00:39:10.821863 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 9 00:39:10.833949 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 9 00:39:10.834260 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 9 00:39:10.834467 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 9 00:39:10.834676 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 9 00:39:10.851548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 9 00:39:10.866388 systemd-networkd[1249]: lo: Link UP May 9 00:39:10.866402 systemd-networkd[1249]: lo: Gained carrier May 9 00:39:10.868315 systemd-networkd[1249]: Enumeration completed May 9 00:39:10.870750 kernel: mousedev: PS/2 mouse device common for all mice May 9 00:39:10.870837 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:39:10.870849 systemd-networkd[1249]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 00:39:10.871673 systemd-networkd[1249]: eth0: Link UP May 9 00:39:10.871684 systemd-networkd[1249]: eth0: Gained carrier May 9 00:39:10.871699 systemd-networkd[1249]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 00:39:10.879250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 9 00:39:10.881101 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 00:39:10.893179 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 9 00:39:10.928818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 00:39:10.929750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:10.933678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 9 00:39:10.934880 systemd-networkd[1249]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 9 00:39:10.949026 kernel: kvm_amd: TSC scaling supported May 9 00:39:10.949079 kernel: kvm_amd: Nested Virtualization enabled May 9 00:39:10.949092 kernel: kvm_amd: Nested Paging enabled May 9 00:39:10.949105 kernel: kvm_amd: LBR virtualization supported May 9 00:39:10.950426 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 9 00:39:10.950450 kernel: kvm_amd: Virtual GIF supported May 9 00:39:10.972763 kernel: EDAC MC: Ver: 3.0.0 May 9 00:39:10.999494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 00:39:11.003790 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 9 00:39:11.023929 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 9 00:39:11.033378 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:39:11.066629 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 9 00:39:11.068406 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 9 00:39:11.078155 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 9 00:39:11.083565 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 9 00:39:11.124211 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:39:11.125982 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 00:39:11.127479 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 9 00:39:11.127499 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 00:39:11.128571 systemd[1]: Reached target machines.target - Containers. May 9 00:39:11.130715 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 9 00:39:11.144893 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 9 00:39:11.147698 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 9 00:39:11.148875 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:39:11.149837 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 9 00:39:11.153992 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 9 00:39:11.157951 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 9 00:39:11.160246 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 9 00:39:11.171208 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 9 00:39:11.172932 kernel: loop0: detected capacity change from 0 to 210664 May 9 00:39:11.182559 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 9 00:39:11.185183 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
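[Editor's note] systemd-machine-id-commit.service above persists the machine ID that was "initialized from the VM UUID" on first boot, replacing the transient bind mount on /etc/machine-id once the root filesystem is writable. The manual equivalent, for reference (illustrative):

    cat /etc/machine-id                 # transient ID generated at early boot
    systemd-machine-id-setup --commit   # write it to disk and drop the temporary bind mount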
May 9 00:39:11.192758 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 9 00:39:11.216771 kernel: loop1: detected capacity change from 0 to 142488 May 9 00:39:11.247401 kernel: loop2: detected capacity change from 0 to 140768 May 9 00:39:11.279759 kernel: loop3: detected capacity change from 0 to 210664 May 9 00:39:11.288768 kernel: loop4: detected capacity change from 0 to 142488 May 9 00:39:11.299771 kernel: loop5: detected capacity change from 0 to 140768 May 9 00:39:11.308241 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 9 00:39:11.308935 (sd-merge)[1317]: Merged extensions into '/usr'. May 9 00:39:11.314521 systemd[1]: Reloading requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)... May 9 00:39:11.314541 systemd[1]: Reloading... May 9 00:39:11.370484 zram_generator::config[1344]: No configuration found. May 9 00:39:11.414826 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 9 00:39:11.489470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:39:11.553910 systemd[1]: Reloading finished in 238 ms. May 9 00:39:11.575812 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 9 00:39:11.577549 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 9 00:39:11.594019 systemd[1]: Starting ensure-sysext.service... May 9 00:39:11.596542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 9 00:39:11.603157 systemd[1]: Reloading requested from client PID 1389 ('systemctl') (unit ensure-sysext.service)... May 9 00:39:11.603177 systemd[1]: Reloading... May 9 00:39:11.627833 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 9 00:39:11.628379 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 9 00:39:11.629884 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 9 00:39:11.630305 systemd-tmpfiles[1390]: ACLs are not supported, ignoring. May 9 00:39:11.630415 systemd-tmpfiles[1390]: ACLs are not supported, ignoring. May 9 00:39:11.638097 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:39:11.638112 systemd-tmpfiles[1390]: Skipping /boot May 9 00:39:11.653978 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot. May 9 00:39:11.653997 systemd-tmpfiles[1390]: Skipping /boot May 9 00:39:11.659869 zram_generator::config[1421]: No configuration found. May 9 00:39:11.794248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:39:11.864462 systemd[1]: Reloading finished in 260 ms. May 9 00:39:11.886747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 9 00:39:11.901971 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:39:11.905042 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
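[Editor's note] The (sd-merge) lines show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extensions into /usr; the kubernetes one is the .raw image Ignition linked into /etc/extensions earlier. A hedged look at how such a merge can be inspected at runtime:

    # Illustrative commands; these verbs are part of systemd-sysext.
    systemd-sysext status     # shows which hierarchies have extensions merged, and which images
    systemd-sysext refresh    # unmerge + re-merge after adding/removing images under /etc/extensions

    # Each image carries a release file that must match the host os-release, e.g.
    #   /usr/lib/extension-release.d/extension-release.kubernetes with keys such as:
    #   ID=flatcar        (or ID=_any)
    #   SYSEXT_LEVEL=1.0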
May 9 00:39:11.907939 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 9 00:39:11.912885 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 9 00:39:11.917956 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 9 00:39:11.924274 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:11.924931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:39:11.929073 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:39:11.933356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:39:11.940008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:39:11.941397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:39:11.941543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:11.945180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 9 00:39:11.945475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:39:11.947585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:39:11.947938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:39:11.951415 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:39:11.951628 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:39:11.953638 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 9 00:39:11.962626 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 9 00:39:11.969573 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:11.970046 augenrules[1498]: No rules May 9 00:39:11.970349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 9 00:39:11.980014 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 9 00:39:11.983397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 9 00:39:11.987231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 9 00:39:11.993009 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 9 00:39:11.994337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 9 00:39:12.008025 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 9 00:39:12.009461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 9 00:39:12.011627 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:39:12.013327 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 9 00:39:12.015011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 9 00:39:12.015358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 9 00:39:12.017037 systemd[1]: modprobe@drm.service: Deactivated successfully. May 9 00:39:12.017253 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 9 00:39:12.018537 systemd-resolved[1467]: Positive Trust Anchors: May 9 00:39:12.018551 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 9 00:39:12.018582 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 9 00:39:12.018763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 9 00:39:12.018962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 9 00:39:12.020697 systemd[1]: modprobe@loop.service: Deactivated successfully. May 9 00:39:12.020915 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 9 00:39:12.022658 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 9 00:39:12.027002 systemd-resolved[1467]: Defaulting to hostname 'linux'. May 9 00:39:12.027427 systemd[1]: Finished ensure-sysext.service. May 9 00:39:12.030498 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 00:39:12.036249 systemd[1]: Reached target network.target - Network. May 9 00:39:12.037266 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 00:39:12.038496 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 9 00:39:12.038558 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 9 00:39:12.046897 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 9 00:39:12.048112 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 9 00:39:12.081888 systemd-networkd[1249]: eth0: Gained IPv6LL May 9 00:39:12.085057 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:39:12.086989 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:39:12.108611 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 9 00:39:12.109923 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 9 00:39:12.109973 systemd-timesyncd[1525]: Initial clock synchronization to Fri 2025-05-09 00:39:12.145255 UTC. May 9 00:39:12.110397 systemd[1]: Reached target sysinit.target - System Initialization. May 9 00:39:12.111621 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 00:39:12.112962 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
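[Editor's note] systemd-timesyncd synchronizes against 10.0.0.1:123, presumably the NTP server handed out with the DHCP lease. Pinning the server statically would be a one-key change; a sketch, assuming the stock timesyncd.conf location:

    # /etc/systemd/timesyncd.conf (illustrative)
    [Time]
    NTP=10.0.0.1              # same server the log shows being contacted
    FallbackNTP=pool.ntp.org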
May 9 00:39:12.114275 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 00:39:12.115587 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 00:39:12.115612 systemd[1]: Reached target paths.target - Path Units. May 9 00:39:12.116568 systemd[1]: Reached target time-set.target - System Time Set. May 9 00:39:12.117816 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 00:39:12.119031 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 00:39:12.120320 systemd[1]: Reached target timers.target - Timer Units. May 9 00:39:12.122032 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 00:39:12.124974 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 00:39:12.127310 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 00:39:12.133793 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 00:39:12.134946 systemd[1]: Reached target sockets.target - Socket Units. May 9 00:39:12.135950 systemd[1]: Reached target basic.target - Basic System. May 9 00:39:12.137077 systemd[1]: System is tainted: cgroupsv1 May 9 00:39:12.137112 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 00:39:12.137134 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 00:39:12.138276 systemd[1]: Starting containerd.service - containerd container runtime... May 9 00:39:12.140465 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:39:12.142584 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 00:39:12.146810 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 00:39:12.149120 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 00:39:12.152911 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 00:39:12.154379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:39:12.156118 jq[1534]: false May 9 00:39:12.156940 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 00:39:12.162069 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:39:12.165896 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 00:39:12.172378 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 00:39:12.178320 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 9 00:39:12.184089 extend-filesystems[1537]: Found loop3 May 9 00:39:12.184089 extend-filesystems[1537]: Found loop4 May 9 00:39:12.184089 extend-filesystems[1537]: Found loop5 May 9 00:39:12.184089 extend-filesystems[1537]: Found sr0 May 9 00:39:12.184089 extend-filesystems[1537]: Found vda May 9 00:39:12.184089 extend-filesystems[1537]: Found vda1 May 9 00:39:12.184089 extend-filesystems[1537]: Found vda2 May 9 00:39:12.184089 extend-filesystems[1537]: Found vda3 May 9 00:39:12.184089 extend-filesystems[1537]: Found usr May 9 00:39:12.184089 extend-filesystems[1537]: Found vda4 May 9 00:39:12.184089 extend-filesystems[1537]: Found vda6 May 9 00:39:12.184089 extend-filesystems[1537]: Found vda7 May 9 00:39:12.184089 extend-filesystems[1537]: Found vda9 May 9 00:39:12.183986 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 00:39:12.204312 extend-filesystems[1537]: Checking size of /dev/vda9 May 9 00:39:12.185416 dbus-daemon[1533]: [system] SELinux support is enabled May 9 00:39:12.186744 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:39:12.193007 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:39:12.198665 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:39:12.212331 jq[1562]: true May 9 00:39:12.206274 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 00:39:12.219956 extend-filesystems[1537]: Resized partition /dev/vda9 May 9 00:39:12.222000 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 00:39:12.222325 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 00:39:12.226309 update_engine[1560]: I20250509 00:39:12.225917 1560 main.cc:92] Flatcar Update Engine starting May 9 00:39:12.227197 systemd[1]: motdgen.service: Deactivated successfully. May 9 00:39:12.227323 extend-filesystems[1573]: resize2fs 1.47.1 (20-May-2024) May 9 00:39:12.228135 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 00:39:12.229835 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 00:39:12.229880 update_engine[1560]: I20250509 00:39:12.229677 1560 update_check_scheduler.cc:74] Next update check in 9m31s May 9 00:39:12.231612 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:39:12.234540 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 00:39:12.235263 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 00:39:12.252749 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1250) May 9 00:39:12.266223 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 00:39:12.269299 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 00:39:12.269637 jq[1579]: true May 9 00:39:12.273303 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:39:12.273621 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 9 00:39:12.293048 tar[1577]: linux-amd64/helm May 9 00:39:12.293493 systemd-logind[1554]: Watching system buttons on /dev/input/event1 (Power Button) May 9 00:39:12.293517 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 9 00:39:12.295141 systemd-logind[1554]: New seat seat0. May 9 00:39:12.295320 systemd[1]: Started update-engine.service - Update Engine. May 9 00:39:12.297328 extend-filesystems[1573]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 00:39:12.297328 extend-filesystems[1573]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:39:12.297328 extend-filesystems[1573]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 00:39:12.302093 extend-filesystems[1537]: Resized filesystem in /dev/vda9 May 9 00:39:12.299025 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:39:12.307307 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:39:12.307625 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:39:12.319060 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:39:12.319379 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 00:39:12.319558 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 00:39:12.322906 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 00:39:12.323069 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 00:39:12.326335 bash[1614]: Updated "/home/core/.ssh/authorized_keys" May 9 00:39:12.326990 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 00:39:12.337084 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 00:39:12.343548 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:39:12.348906 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 00:39:12.389369 locksmithd[1618]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:39:12.423076 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:39:12.490911 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:39:12.500928 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:39:12.510219 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:39:12.510528 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:39:12.563167 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:39:12.587834 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:39:12.634202 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:39:12.641688 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 00:39:12.643116 systemd[1]: Reached target getty.target - Login Prompts. 
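Before the getty and serial-getty prompts come up, sshd-keygen generates fresh RSA/ECDSA/ED25519 host keys and update-ssh-keys refreshes /home/core/.ssh/authorized_keys. A generic sketch of the host-key step with stock OpenSSH tooling (not necessarily what the Flatcar unit invokes):

    # Generate any missing host keys of all default types under /etc/ssh
    ssh-keygen -A
    # Print the fingerprint of one of the new keys
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub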
May 9 00:39:12.739377 containerd[1580]: time="2025-05-09T00:39:12.739289496Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 00:39:12.777872 containerd[1580]: time="2025-05-09T00:39:12.777799791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.779652426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.779685618Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.779703712Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.779939103Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.779964581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.780046886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.780062315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.780394688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.780415026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.780430806Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:39:12.780758 containerd[1580]: time="2025-05-09T00:39:12.780443470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:39:12.781048 containerd[1580]: time="2025-05-09T00:39:12.780580517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:39:12.781282 containerd[1580]: time="2025-05-09T00:39:12.781261554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 00:39:12.781569 containerd[1580]: time="2025-05-09T00:39:12.781546318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:39:12.781643 containerd[1580]: time="2025-05-09T00:39:12.781627991Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:39:12.781847 containerd[1580]: time="2025-05-09T00:39:12.781827806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:39:12.781977 containerd[1580]: time="2025-05-09T00:39:12.781960275Z" level=info msg="metadata content store policy set" policy=shared May 9 00:39:12.788230 containerd[1580]: time="2025-05-09T00:39:12.788203605Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:39:12.788345 containerd[1580]: time="2025-05-09T00:39:12.788327237Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:39:12.788449 containerd[1580]: time="2025-05-09T00:39:12.788437133Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:39:12.788564 containerd[1580]: time="2025-05-09T00:39:12.788531350Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:39:12.788649 containerd[1580]: time="2025-05-09T00:39:12.788636577Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:39:12.788917 containerd[1580]: time="2025-05-09T00:39:12.788898609Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:39:12.789474 containerd[1580]: time="2025-05-09T00:39:12.789444513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 00:39:12.789717 containerd[1580]: time="2025-05-09T00:39:12.789700994Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:39:12.789788 containerd[1580]: time="2025-05-09T00:39:12.789776025Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:39:12.789871 containerd[1580]: time="2025-05-09T00:39:12.789857648Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:39:12.789921 containerd[1580]: time="2025-05-09T00:39:12.789910727Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:39:12.789970 containerd[1580]: time="2025-05-09T00:39:12.789959439Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:39:12.790023 containerd[1580]: time="2025-05-09T00:39:12.790011827Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 00:39:12.790091 containerd[1580]: time="2025-05-09T00:39:12.790074995Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:39:12.790158 containerd[1580]: time="2025-05-09T00:39:12.790135289Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 9 00:39:12.790214 containerd[1580]: time="2025-05-09T00:39:12.790202625Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:39:12.790270 containerd[1580]: time="2025-05-09T00:39:12.790258159Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:39:12.790327 containerd[1580]: time="2025-05-09T00:39:12.790315547Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:39:12.790394 containerd[1580]: time="2025-05-09T00:39:12.790381030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790451 containerd[1580]: time="2025-05-09T00:39:12.790432125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790509 containerd[1580]: time="2025-05-09T00:39:12.790496165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790611 containerd[1580]: time="2025-05-09T00:39:12.790592987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790684 containerd[1580]: time="2025-05-09T00:39:12.790671715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790756 containerd[1580]: time="2025-05-09T00:39:12.790743850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790816 containerd[1580]: time="2025-05-09T00:39:12.790804183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790864 containerd[1580]: time="2025-05-09T00:39:12.790853255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790919 containerd[1580]: time="2025-05-09T00:39:12.790907818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:39:12.790969 containerd[1580]: time="2025-05-09T00:39:12.790958242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:39:12.791023 containerd[1580]: time="2025-05-09T00:39:12.791012304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:39:12.791069 containerd[1580]: time="2025-05-09T00:39:12.791059041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:39:12.791132 containerd[1580]: time="2025-05-09T00:39:12.791120036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:39:12.791199 containerd[1580]: time="2025-05-09T00:39:12.791186601Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 00:39:12.791356 containerd[1580]: time="2025-05-09T00:39:12.791246693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:39:12.791356 containerd[1580]: time="2025-05-09T00:39:12.791261180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 9 00:39:12.791356 containerd[1580]: time="2025-05-09T00:39:12.791271129Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:39:12.791538 containerd[1580]: time="2025-05-09T00:39:12.791485511Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:39:12.791538 containerd[1580]: time="2025-05-09T00:39:12.791510719Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:39:12.793145 containerd[1580]: time="2025-05-09T00:39:12.791521449Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:39:12.793259 containerd[1580]: time="2025-05-09T00:39:12.793236526Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:39:12.793342 containerd[1580]: time="2025-05-09T00:39:12.793322838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:39:12.793412 containerd[1580]: time="2025-05-09T00:39:12.793395744Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:39:12.793471 containerd[1580]: time="2025-05-09T00:39:12.793458061Z" level=info msg="NRI interface is disabled by configuration." May 9 00:39:12.793551 containerd[1580]: time="2025-05-09T00:39:12.793534505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 00:39:12.794065 containerd[1580]: time="2025-05-09T00:39:12.793989087Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:39:12.794549 containerd[1580]: time="2025-05-09T00:39:12.794512118Z" level=info msg="Connect containerd service" May 9 00:39:12.794712 containerd[1580]: time="2025-05-09T00:39:12.794673812Z" level=info msg="using legacy CRI server" May 9 00:39:12.794818 containerd[1580]: time="2025-05-09T00:39:12.794796822Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:39:12.795075 containerd[1580]: time="2025-05-09T00:39:12.795054516Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:39:12.796116 containerd[1580]: time="2025-05-09T00:39:12.796085179Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:39:12.796440 containerd[1580]: time="2025-05-09T00:39:12.796351318Z" level=info msg="Start subscribing containerd event" May 9 00:39:12.796498 containerd[1580]: time="2025-05-09T00:39:12.796466324Z" level=info msg="Start recovering state" May 9 00:39:12.796598 containerd[1580]: time="2025-05-09T00:39:12.796572994Z" level=info msg="Start event monitor" May 9 00:39:12.796645 containerd[1580]: time="2025-05-09T00:39:12.796617047Z" level=info msg="Start snapshots syncer" May 9 00:39:12.796645 containerd[1580]: time="2025-05-09T00:39:12.796632846Z" level=info msg="Start cni network conf syncer for default" May 9 00:39:12.796645 containerd[1580]: time="2025-05-09T00:39:12.796641773Z" level=info msg="Start streaming server" May 9 00:39:12.797099 containerd[1580]: time="2025-05-09T00:39:12.797077811Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:39:12.797326 containerd[1580]: time="2025-05-09T00:39:12.797311078Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:39:12.797682 containerd[1580]: time="2025-05-09T00:39:12.797633563Z" level=info msg="containerd successfully booted in 0.060529s" May 9 00:39:12.797850 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:39:13.019442 tar[1577]: linux-amd64/LICENSE May 9 00:39:13.019442 tar[1577]: linux-amd64/README.md May 9 00:39:13.035439 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:39:13.732260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:39:13.734252 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:39:13.736191 systemd[1]: Startup finished in 6.313s (kernel) + 4.564s (userspace) = 10.878s. 
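containerd comes up with the CRI plugin using the overlayfs snapshotter and runc with SystemdCgroup:false, and it warns that no CNI network config was found in /etc/cni/net.d, which is expected before a pod network add-on is installed. As an illustrative sketch only (assuming the stock /etc/containerd/config.toml layout, which may differ on Flatcar; this is not a step the log performs), those settings are normally adjusted like this:

    # Write the default config, then enable the systemd cgroup driver for runc
    mkdir -p /etc/containerd
    containerd config default > /etc/containerd/config.toml
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    systemctl restart containerd
    # The "no network config found" warning clears once a CNI add-on drops a conflist into /etc/cni/net.d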
May 9 00:39:13.738638 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:39:14.479049 kubelet[1667]: E0509 00:39:14.478937 1667 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:39:14.483082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:39:14.483372 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:39:17.204959 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:39:17.213038 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:49932.service - OpenSSH per-connection server daemon (10.0.0.1:49932). May 9 00:39:17.250823 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 49932 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:39:17.252700 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:39:17.261274 systemd-logind[1554]: New session 1 of user core. May 9 00:39:17.262333 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:39:17.270943 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:39:17.283629 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:39:17.286111 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:39:17.295471 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:39:17.429704 systemd[1686]: Queued start job for default target default.target. May 9 00:39:17.430107 systemd[1686]: Created slice app.slice - User Application Slice. May 9 00:39:17.430128 systemd[1686]: Reached target paths.target - Paths. May 9 00:39:17.430141 systemd[1686]: Reached target timers.target - Timers. May 9 00:39:17.446835 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:39:17.454194 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:39:17.454267 systemd[1686]: Reached target sockets.target - Sockets. May 9 00:39:17.454280 systemd[1686]: Reached target basic.target - Basic System. May 9 00:39:17.454320 systemd[1686]: Reached target default.target - Main User Target. May 9 00:39:17.454354 systemd[1686]: Startup finished in 150ms. May 9 00:39:17.454901 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:39:17.456542 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:39:17.515239 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:49948.service - OpenSSH per-connection server daemon (10.0.0.1:49948). May 9 00:39:17.548168 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 49948 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:39:17.549705 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:39:17.553753 systemd-logind[1554]: New session 2 of user core. May 9 00:39:17.563016 systemd[1]: Started session-2.scope - Session 2 of User core. 
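The first kubelet start fails because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm when a node is initialized or joined, so this failure is expected on a node that has not yet been bootstrapped. A hedged sketch of how the file typically comes into being (kubeadm workflow assumed; the CIDR value is only an example and is not taken from this node):

    # On a control-plane node, kubeadm writes /var/lib/kubelet/config.yaml during init
    kubeadm init --pod-network-cidr=10.244.0.0/16
    # or, to regenerate just the kubelet settings on an already-initialized node
    kubeadm init phase kubelet-start
    # afterwards the unit can start cleanly
    systemctl restart kubelet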
May 9 00:39:17.617788 sshd[1699]: pam_unix(sshd:session): session closed for user core May 9 00:39:17.625965 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:49964.service - OpenSSH per-connection server daemon (10.0.0.1:49964). May 9 00:39:17.626443 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:49948.service: Deactivated successfully. May 9 00:39:17.628799 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit. May 9 00:39:17.629956 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:39:17.631149 systemd-logind[1554]: Removed session 2. May 9 00:39:17.659078 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 49964 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:39:17.660568 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:39:17.665062 systemd-logind[1554]: New session 3 of user core. May 9 00:39:17.675093 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:39:17.725760 sshd[1704]: pam_unix(sshd:session): session closed for user core May 9 00:39:17.734968 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:49974.service - OpenSSH per-connection server daemon (10.0.0.1:49974). May 9 00:39:17.735630 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:49964.service: Deactivated successfully. May 9 00:39:17.738037 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit. May 9 00:39:17.739471 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:39:17.740427 systemd-logind[1554]: Removed session 3. May 9 00:39:17.768243 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 49974 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:39:17.769897 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:39:17.774065 systemd-logind[1554]: New session 4 of user core. May 9 00:39:17.788143 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:39:17.843085 sshd[1712]: pam_unix(sshd:session): session closed for user core May 9 00:39:17.855044 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:49980.service - OpenSSH per-connection server daemon (10.0.0.1:49980). May 9 00:39:17.855661 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:49974.service: Deactivated successfully. May 9 00:39:17.857537 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:39:17.858296 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit. May 9 00:39:17.859878 systemd-logind[1554]: Removed session 4. May 9 00:39:17.887663 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 49980 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:39:17.889315 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:39:17.894256 systemd-logind[1554]: New session 5 of user core. May 9 00:39:17.909250 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:39:17.968127 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:39:17.968533 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:39:18.829991 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 9 00:39:18.830389 (dockerd)[1745]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:39:19.996544 dockerd[1745]: time="2025-05-09T00:39:19.996451253Z" level=info msg="Starting up" May 9 00:39:20.721235 dockerd[1745]: time="2025-05-09T00:39:20.721182394Z" level=info msg="Loading containers: start." May 9 00:39:20.830770 kernel: Initializing XFRM netlink socket May 9 00:39:20.910302 systemd-networkd[1249]: docker0: Link UP May 9 00:39:20.933634 dockerd[1745]: time="2025-05-09T00:39:20.933577626Z" level=info msg="Loading containers: done." May 9 00:39:20.951677 dockerd[1745]: time="2025-05-09T00:39:20.951594681Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:39:20.951892 dockerd[1745]: time="2025-05-09T00:39:20.951773086Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 00:39:20.951962 dockerd[1745]: time="2025-05-09T00:39:20.951935047Z" level=info msg="Daemon has completed initialization" May 9 00:39:21.003232 dockerd[1745]: time="2025-05-09T00:39:21.003050266Z" level=info msg="API listen on /run/docker.sock" May 9 00:39:21.003281 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:39:21.831090 containerd[1580]: time="2025-05-09T00:39:21.831028625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 9 00:39:22.399860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675345548.mount: Deactivated successfully. 
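dockerd starts, initializes the XFRM netlink socket and the docker0 bridge, and warns that native diff is disabled for overlay2 because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; that is a performance note, not an error. A small sketch for verifying the daemon state afterwards (standard docker CLI, shown only as an illustration):

    # Confirm the storage driver and version reported above
    docker info --format '{{.Driver}} {{.ServerVersion}}'
    # The daemon answers on the default socket printed in the log
    docker -H unix:///run/docker.sock version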
May 9 00:39:23.395693 containerd[1580]: time="2025-05-09T00:39:23.395637919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:23.396261 containerd[1580]: time="2025-05-09T00:39:23.396212432Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 9 00:39:23.397406 containerd[1580]: time="2025-05-09T00:39:23.397376833Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:23.400514 containerd[1580]: time="2025-05-09T00:39:23.400470928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:23.401525 containerd[1580]: time="2025-05-09T00:39:23.401467110Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 1.570375106s" May 9 00:39:23.401585 containerd[1580]: time="2025-05-09T00:39:23.401532146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 9 00:39:23.431720 containerd[1580]: time="2025-05-09T00:39:23.431676816Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 9 00:39:24.760536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:39:24.770897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:39:24.968578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:39:24.974489 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:39:25.044211 kubelet[1973]: E0509 00:39:25.044021 1973 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:39:25.051544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:39:25.051987 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
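The PullImage entries for registry.k8s.io/kube-apiserver:v1.30.12 go through containerd's CRI image service (the caller is not shown in the log), independently of the still-failing kubelet. An equivalent manual pull over the same socket, as a hedged illustration (crictl and ctr are assumed to be installed; they do not appear in the log):

    # Pull and list the image via the CRI endpoint containerd is serving
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.30.12
    crictl images | grep kube-apiserver
    # Or directly against containerd's k8s.io namespace
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.30.12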
May 9 00:39:26.160433 containerd[1580]: time="2025-05-09T00:39:26.160362023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:26.161668 containerd[1580]: time="2025-05-09T00:39:26.161616923Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 9 00:39:26.164668 containerd[1580]: time="2025-05-09T00:39:26.164635083Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:26.167569 containerd[1580]: time="2025-05-09T00:39:26.167536572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:26.168514 containerd[1580]: time="2025-05-09T00:39:26.168479982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.736765835s" May 9 00:39:26.168561 containerd[1580]: time="2025-05-09T00:39:26.168523074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 9 00:39:26.192548 containerd[1580]: time="2025-05-09T00:39:26.192501941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 9 00:39:27.430064 containerd[1580]: time="2025-05-09T00:39:27.429992121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:27.431579 containerd[1580]: time="2025-05-09T00:39:27.431529259Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 9 00:39:27.433304 containerd[1580]: time="2025-05-09T00:39:27.433255661Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:27.436789 containerd[1580]: time="2025-05-09T00:39:27.436750130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:27.438362 containerd[1580]: time="2025-05-09T00:39:27.438321803Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.245770252s" May 9 00:39:27.438423 containerd[1580]: time="2025-05-09T00:39:27.438362547Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 9 00:39:27.462703 
containerd[1580]: time="2025-05-09T00:39:27.462658304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 00:39:29.177043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617800384.mount: Deactivated successfully. May 9 00:39:30.356405 containerd[1580]: time="2025-05-09T00:39:30.356318936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:30.356985 containerd[1580]: time="2025-05-09T00:39:30.356894048Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 9 00:39:30.358124 containerd[1580]: time="2025-05-09T00:39:30.358083928Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:30.360252 containerd[1580]: time="2025-05-09T00:39:30.360173878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:30.360882 containerd[1580]: time="2025-05-09T00:39:30.360831345Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.89812846s" May 9 00:39:30.360882 containerd[1580]: time="2025-05-09T00:39:30.360879130Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 9 00:39:30.390304 containerd[1580]: time="2025-05-09T00:39:30.390263220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:39:30.989443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083158921.mount: Deactivated successfully. 
May 9 00:39:32.745664 containerd[1580]: time="2025-05-09T00:39:32.745501577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:32.751543 containerd[1580]: time="2025-05-09T00:39:32.751423531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 9 00:39:32.759133 containerd[1580]: time="2025-05-09T00:39:32.758954656Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:32.768941 containerd[1580]: time="2025-05-09T00:39:32.768775999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:32.771056 containerd[1580]: time="2025-05-09T00:39:32.770965264Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.380647723s" May 9 00:39:32.771056 containerd[1580]: time="2025-05-09T00:39:32.771036012Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 9 00:39:32.814263 containerd[1580]: time="2025-05-09T00:39:32.814146671Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 9 00:39:33.484816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount213574565.mount: Deactivated successfully. 
May 9 00:39:33.507507 containerd[1580]: time="2025-05-09T00:39:33.507208153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:33.509512 containerd[1580]: time="2025-05-09T00:39:33.509404085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 9 00:39:33.511938 containerd[1580]: time="2025-05-09T00:39:33.511862450Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:33.514808 containerd[1580]: time="2025-05-09T00:39:33.514743975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:33.515769 containerd[1580]: time="2025-05-09T00:39:33.515697251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 701.487813ms" May 9 00:39:33.515848 containerd[1580]: time="2025-05-09T00:39:33.515770052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 9 00:39:33.545844 containerd[1580]: time="2025-05-09T00:39:33.545797661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 9 00:39:34.138565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326960438.mount: Deactivated successfully. May 9 00:39:35.302140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 00:39:35.327173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:39:35.505807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:39:35.512987 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:39:35.792268 kubelet[2135]: E0509 00:39:35.792090 2135 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:39:35.795920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:39:35.796189 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
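Note that pause:3.9 is pulled here even though the CRI config dump earlier lists SandboxImage registry.k8s.io/pause:3.8; kubeadm-era tooling commonly pins a newer pause image than containerd's default. A hedged sketch of aligning the two (sandbox_image is the CRI plugin's config key; this assumes the /etc/containerd/config.toml layout from the earlier sketch and is not a step the log performs):

    # Point the CRI plugin at the pause image that was actually pulled
    sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.k8s.io/pause:3.9"#' /etc/containerd/config.toml
    systemctl restart containerd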
May 9 00:39:36.504109 containerd[1580]: time="2025-05-09T00:39:36.504017173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:36.504979 containerd[1580]: time="2025-05-09T00:39:36.504952234Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 9 00:39:36.506632 containerd[1580]: time="2025-05-09T00:39:36.506584513Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:36.509386 containerd[1580]: time="2025-05-09T00:39:36.509352781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:39:36.510500 containerd[1580]: time="2025-05-09T00:39:36.510465485Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.964440862s" May 9 00:39:36.510538 containerd[1580]: time="2025-05-09T00:39:36.510501099Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 9 00:39:39.119789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:39:39.127939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:39:39.145093 systemd[1]: Reloading requested from client PID 2230 ('systemctl') (unit session-5.scope)... May 9 00:39:39.145113 systemd[1]: Reloading... May 9 00:39:39.223963 zram_generator::config[2269]: No configuration found. May 9 00:39:39.684034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:39:39.764646 systemd[1]: Reloading finished in 619 ms. May 9 00:39:39.812902 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:39:39.813016 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:39:39.813421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:39:39.828110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:39:39.967411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:39:39.973602 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:39:40.013273 kubelet[2329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:39:40.013273 kubelet[2329]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
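Once the config file exists, the restarted kubelet (pid 2329) warns that --container-runtime-endpoint and --pod-infra-container-image are deprecated, and the next entry adds --volume-plugin-dir; these settings belong in the KubeletConfiguration file. A hedged sketch of the equivalent config-file fields (field names from the kubelet.config.k8s.io/v1beta1 API; the values are illustrative, and appending blindly could duplicate keys already present):

    # Append the equivalent settings to /var/lib/kubelet/config.yaml (v1beta1 KubeletConfiguration)
    cat >> /var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
    systemctl restart kubelet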
May 9 00:39:40.013273 kubelet[2329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:39:40.013720 kubelet[2329]: I0509 00:39:40.013347 2329 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:39:40.693910 kubelet[2329]: I0509 00:39:40.693868 2329 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:39:40.693910 kubelet[2329]: I0509 00:39:40.693898 2329 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:39:40.694119 kubelet[2329]: I0509 00:39:40.694109 2329 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:39:40.710344 kubelet[2329]: I0509 00:39:40.710302 2329 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:39:40.711079 kubelet[2329]: E0509 00:39:40.711060 2329 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.723829 kubelet[2329]: I0509 00:39:40.723797 2329 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 00:39:40.724316 kubelet[2329]: I0509 00:39:40.724281 2329 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:39:40.724512 kubelet[2329]: I0509 00:39:40.724306 2329 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:39:40.725029 kubelet[2329]: I0509 00:39:40.724997 2329 topology_manager.go:138] "Creating topology manager with none 
policy" May 9 00:39:40.725029 kubelet[2329]: I0509 00:39:40.725027 2329 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:39:40.725261 kubelet[2329]: I0509 00:39:40.725238 2329 state_mem.go:36] "Initialized new in-memory state store" May 9 00:39:40.725942 kubelet[2329]: I0509 00:39:40.725920 2329 kubelet.go:400] "Attempting to sync node with API server" May 9 00:39:40.725942 kubelet[2329]: I0509 00:39:40.725939 2329 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:39:40.726000 kubelet[2329]: I0509 00:39:40.725979 2329 kubelet.go:312] "Adding apiserver pod source" May 9 00:39:40.726033 kubelet[2329]: I0509 00:39:40.726007 2329 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:39:40.729320 kubelet[2329]: W0509 00:39:40.729270 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.729366 kubelet[2329]: E0509 00:39:40.729355 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.729543 kubelet[2329]: W0509 00:39:40.729467 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.729582 kubelet[2329]: E0509 00:39:40.729546 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.730885 kubelet[2329]: I0509 00:39:40.730853 2329 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:39:40.732761 kubelet[2329]: I0509 00:39:40.732706 2329 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:39:40.732866 kubelet[2329]: W0509 00:39:40.732833 2329 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 9 00:39:40.734004 kubelet[2329]: I0509 00:39:40.733868 2329 server.go:1264] "Started kubelet" May 9 00:39:40.734279 kubelet[2329]: I0509 00:39:40.734254 2329 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:39:40.734816 kubelet[2329]: I0509 00:39:40.734766 2329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:39:40.735263 kubelet[2329]: I0509 00:39:40.735238 2329 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:39:40.736580 kubelet[2329]: I0509 00:39:40.735437 2329 server.go:455] "Adding debug handlers to kubelet server" May 9 00:39:40.736752 kubelet[2329]: I0509 00:39:40.736719 2329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:39:40.739003 kubelet[2329]: E0509 00:39:40.738813 2329 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db4f1b6a26835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:39:40.733835317 +0000 UTC m=+0.756000119,LastTimestamp:2025-05-09 00:39:40.733835317 +0000 UTC m=+0.756000119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:39:40.739328 kubelet[2329]: E0509 00:39:40.739311 2329 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:39:40.739395 kubelet[2329]: E0509 00:39:40.739384 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:40.739455 kubelet[2329]: I0509 00:39:40.739426 2329 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:39:40.739523 kubelet[2329]: I0509 00:39:40.739508 2329 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:39:40.739559 kubelet[2329]: I0509 00:39:40.739554 2329 reconciler.go:26] "Reconciler: start to sync state" May 9 00:39:40.740029 kubelet[2329]: E0509 00:39:40.739759 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms" May 9 00:39:40.740184 kubelet[2329]: W0509 00:39:40.740133 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.740232 kubelet[2329]: E0509 00:39:40.740193 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.740311 kubelet[2329]: I0509 00:39:40.740292 2329 factory.go:221] Registration of the systemd container factory successfully May 9 00:39:40.740389 kubelet[2329]: I0509 00:39:40.740373 2329 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:39:40.743631 kubelet[2329]: I0509 00:39:40.743596 2329 factory.go:221] Registration of the containerd container factory successfully May 9 00:39:40.760899 kubelet[2329]: I0509 00:39:40.760723 2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:39:40.762944 kubelet[2329]: I0509 00:39:40.762921 2329 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 00:39:40.763006 kubelet[2329]: I0509 00:39:40.762962 2329 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:39:40.763041 kubelet[2329]: I0509 00:39:40.763005 2329 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:39:40.763078 kubelet[2329]: E0509 00:39:40.763051 2329 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:39:40.764050 kubelet[2329]: W0509 00:39:40.763941 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.764050 kubelet[2329]: E0509 00:39:40.764022 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:40.771875 kubelet[2329]: I0509 00:39:40.771843 2329 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:39:40.771875 kubelet[2329]: I0509 00:39:40.771858 2329 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:39:40.771971 kubelet[2329]: I0509 00:39:40.771886 2329 state_mem.go:36] "Initialized new in-memory state store" May 9 00:39:40.841867 kubelet[2329]: I0509 00:39:40.841817 2329 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:39:40.842327 kubelet[2329]: E0509 00:39:40.842290 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" May 9 00:39:40.863426 kubelet[2329]: E0509 00:39:40.863385 2329 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:39:40.941125 kubelet[2329]: E0509 00:39:40.941080 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" May 9 00:39:41.044021 kubelet[2329]: I0509 00:39:41.043877 2329 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:39:41.044490 kubelet[2329]: E0509 00:39:41.044265 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" May 9 00:39:41.064535 kubelet[2329]: E0509 00:39:41.064437 2329 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:39:41.259084 kubelet[2329]: I0509 00:39:41.259022 2329 policy_none.go:49] "None policy: Start" May 9 00:39:41.260274 kubelet[2329]: I0509 00:39:41.260249 2329 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:39:41.260274 kubelet[2329]: I0509 00:39:41.260277 2329 state_mem.go:35] "Initializing new in-memory state store" May 9 00:39:41.290306 kubelet[2329]: I0509 00:39:41.290275 2329 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:39:41.290585 kubelet[2329]: I0509 
00:39:41.290541 2329 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:39:41.290715 kubelet[2329]: I0509 00:39:41.290705 2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:39:41.292964 kubelet[2329]: E0509 00:39:41.292931 2329 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 00:39:41.341989 kubelet[2329]: E0509 00:39:41.341850 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" May 9 00:39:41.446043 kubelet[2329]: I0509 00:39:41.445990 2329 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:39:41.446524 kubelet[2329]: E0509 00:39:41.446490 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" May 9 00:39:41.464669 kubelet[2329]: I0509 00:39:41.464596 2329 topology_manager.go:215] "Topology Admit Handler" podUID="354e06284f526a2df40b08d3293424b9" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 00:39:41.466005 kubelet[2329]: I0509 00:39:41.465983 2329 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 00:39:41.466620 kubelet[2329]: I0509 00:39:41.466606 2329 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 00:39:41.545390 kubelet[2329]: I0509 00:39:41.545324 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:41.545390 kubelet[2329]: I0509 00:39:41.545376 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/354e06284f526a2df40b08d3293424b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"354e06284f526a2df40b08d3293424b9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:39:41.545390 kubelet[2329]: I0509 00:39:41.545398 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/354e06284f526a2df40b08d3293424b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"354e06284f526a2df40b08d3293424b9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:39:41.545613 kubelet[2329]: I0509 00:39:41.545412 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/354e06284f526a2df40b08d3293424b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"354e06284f526a2df40b08d3293424b9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:39:41.545613 kubelet[2329]: I0509 00:39:41.545433 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:41.545613 kubelet[2329]: I0509 00:39:41.545453 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:41.545613 kubelet[2329]: I0509 00:39:41.545468 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:41.545613 kubelet[2329]: I0509 00:39:41.545485 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:41.545783 kubelet[2329]: I0509 00:39:41.545502 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 00:39:41.676895 kubelet[2329]: W0509 00:39:41.676677 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:41.676895 kubelet[2329]: E0509 00:39:41.676774 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:41.772026 kubelet[2329]: E0509 00:39:41.771979 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:41.772688 kubelet[2329]: E0509 00:39:41.772401 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:41.772845 containerd[1580]: time="2025-05-09T00:39:41.772625696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:354e06284f526a2df40b08d3293424b9,Namespace:kube-system,Attempt:0,}" May 9 00:39:41.772845 containerd[1580]: time="2025-05-09T00:39:41.772812233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 9 00:39:41.774235 kubelet[2329]: E0509 00:39:41.774207 2329 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:41.774705 containerd[1580]: time="2025-05-09T00:39:41.774658345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 9 00:39:42.143150 kubelet[2329]: E0509 00:39:42.142961 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s" May 9 00:39:42.150796 kubelet[2329]: W0509 00:39:42.150682 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:42.150918 kubelet[2329]: E0509 00:39:42.150806 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:42.248797 kubelet[2329]: I0509 00:39:42.248756 2329 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:39:42.249273 kubelet[2329]: E0509 00:39:42.249219 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" May 9 00:39:42.258788 kubelet[2329]: W0509 00:39:42.258752 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:42.258788 kubelet[2329]: E0509 00:39:42.258795 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:42.294859 kubelet[2329]: W0509 00:39:42.294755 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:42.294859 kubelet[2329]: E0509 00:39:42.294858 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:42.801082 kubelet[2329]: E0509 00:39:42.801029 2329 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:43.282141 kubelet[2329]: W0509 00:39:43.282016 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:43.282141 kubelet[2329]: E0509 00:39:43.282065 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:43.743563 kubelet[2329]: E0509 00:39:43.743441 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="3.2s" May 9 00:39:43.760121 kubelet[2329]: W0509 00:39:43.760070 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:43.760121 kubelet[2329]: E0509 00:39:43.760117 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:43.851012 kubelet[2329]: I0509 00:39:43.850957 2329 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:39:43.851390 kubelet[2329]: E0509 00:39:43.851355 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" May 9 00:39:44.100072 kubelet[2329]: W0509 00:39:44.099914 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:44.100072 kubelet[2329]: E0509 00:39:44.099964 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:44.684789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777760020.mount: Deactivated successfully. 
May 9 00:39:44.862963 kubelet[2329]: W0509 00:39:44.862914 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:44.862963 kubelet[2329]: E0509 00:39:44.862964 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 9 00:39:45.075952 containerd[1580]: time="2025-05-09T00:39:45.075774434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:39:45.106181 containerd[1580]: time="2025-05-09T00:39:45.106082922Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:39:45.113481 containerd[1580]: time="2025-05-09T00:39:45.113435471Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:39:45.114872 containerd[1580]: time="2025-05-09T00:39:45.114818703Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:39:45.133689 containerd[1580]: time="2025-05-09T00:39:45.133630377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:39:45.177819 containerd[1580]: time="2025-05-09T00:39:45.177709529Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:39:45.214390 containerd[1580]: time="2025-05-09T00:39:45.214298284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:39:45.247923 containerd[1580]: time="2025-05-09T00:39:45.247851928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:39:45.248681 containerd[1580]: time="2025-05-09T00:39:45.248620904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.473837251s" May 9 00:39:45.249475 containerd[1580]: time="2025-05-09T00:39:45.249426803Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.476722261s" May 9 00:39:45.250146 containerd[1580]: time="2025-05-09T00:39:45.250102669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.477239302s" May 9 00:39:45.423930 containerd[1580]: time="2025-05-09T00:39:45.422917352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:39:45.423930 containerd[1580]: time="2025-05-09T00:39:45.422965717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:39:45.423930 containerd[1580]: time="2025-05-09T00:39:45.422975417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:39:45.423930 containerd[1580]: time="2025-05-09T00:39:45.423050891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:39:45.425621 containerd[1580]: time="2025-05-09T00:39:45.425413152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:39:45.425792 containerd[1580]: time="2025-05-09T00:39:45.425511324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:39:45.425848 containerd[1580]: time="2025-05-09T00:39:45.425778921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:39:45.426925 containerd[1580]: time="2025-05-09T00:39:45.426861866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:39:45.427347 containerd[1580]: time="2025-05-09T00:39:45.427260776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:39:45.428688 containerd[1580]: time="2025-05-09T00:39:45.428642153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:39:45.428843 containerd[1580]: time="2025-05-09T00:39:45.428780191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:39:45.429110 containerd[1580]: time="2025-05-09T00:39:45.429081510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:39:45.490869 containerd[1580]: time="2025-05-09T00:39:45.490625343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:354e06284f526a2df40b08d3293424b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbb306b2c20d17de43c6846c0e0cae20978fed05d9329057b251af7589b297b6\"" May 9 00:39:45.492403 kubelet[2329]: E0509 00:39:45.492377 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:45.493223 containerd[1580]: time="2025-05-09T00:39:45.493186304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"afcc6546ff7d61e9d29dc36a18ec806f5a16c3c3aed16807b4278b1b5b2a9f50\"" May 9 00:39:45.494279 kubelet[2329]: E0509 00:39:45.494142 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:45.495938 containerd[1580]: time="2025-05-09T00:39:45.495909524Z" level=info msg="CreateContainer within sandbox \"cbb306b2c20d17de43c6846c0e0cae20978fed05d9329057b251af7589b297b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:39:45.496341 containerd[1580]: time="2025-05-09T00:39:45.496308905Z" level=info msg="CreateContainer within sandbox \"afcc6546ff7d61e9d29dc36a18ec806f5a16c3c3aed16807b4278b1b5b2a9f50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:39:45.499397 containerd[1580]: time="2025-05-09T00:39:45.499369593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c70a0b44311d644e9c915fdc53411c22b095599c9a56ae9101b7c9fefc3c6f0\"" May 9 00:39:45.499916 kubelet[2329]: E0509 00:39:45.499901 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:45.501366 containerd[1580]: time="2025-05-09T00:39:45.501346945Z" level=info msg="CreateContainer within sandbox \"9c70a0b44311d644e9c915fdc53411c22b095599c9a56ae9101b7c9fefc3c6f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:39:46.259912 kubelet[2329]: E0509 00:39:46.259713 2329 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db4f1b6a26835 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:39:40.733835317 +0000 UTC m=+0.756000119,LastTimestamp:2025-05-09 00:39:40.733835317 +0000 UTC m=+0.756000119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:39:46.441954 containerd[1580]: time="2025-05-09T00:39:46.441891721Z" level=info msg="CreateContainer within sandbox 
\"afcc6546ff7d61e9d29dc36a18ec806f5a16c3c3aed16807b4278b1b5b2a9f50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9df19804030a9c00a0192de85dfb88dd7d0e26623a40d07496966db2201d48e6\"" May 9 00:39:46.442700 containerd[1580]: time="2025-05-09T00:39:46.442648970Z" level=info msg="StartContainer for \"9df19804030a9c00a0192de85dfb88dd7d0e26623a40d07496966db2201d48e6\"" May 9 00:39:46.450480 containerd[1580]: time="2025-05-09T00:39:46.450426461Z" level=info msg="CreateContainer within sandbox \"cbb306b2c20d17de43c6846c0e0cae20978fed05d9329057b251af7589b297b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a542c51edb2d9e52a8b2b726f17a94d171c86fac83266ce1bf32c10b4076cb3\"" May 9 00:39:46.451809 containerd[1580]: time="2025-05-09T00:39:46.451783102Z" level=info msg="StartContainer for \"3a542c51edb2d9e52a8b2b726f17a94d171c86fac83266ce1bf32c10b4076cb3\"" May 9 00:39:46.454230 containerd[1580]: time="2025-05-09T00:39:46.454173682Z" level=info msg="CreateContainer within sandbox \"9c70a0b44311d644e9c915fdc53411c22b095599c9a56ae9101b7c9fefc3c6f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4809c1e9e51b4fa98b6fbe92efb92505c403889cb8a6b4948918a9b3ad1ee824\"" May 9 00:39:46.454861 containerd[1580]: time="2025-05-09T00:39:46.454837121Z" level=info msg="StartContainer for \"4809c1e9e51b4fa98b6fbe92efb92505c403889cb8a6b4948918a9b3ad1ee824\"" May 9 00:39:46.550986 containerd[1580]: time="2025-05-09T00:39:46.550935543Z" level=info msg="StartContainer for \"4809c1e9e51b4fa98b6fbe92efb92505c403889cb8a6b4948918a9b3ad1ee824\" returns successfully" May 9 00:39:46.551117 containerd[1580]: time="2025-05-09T00:39:46.551092448Z" level=info msg="StartContainer for \"9df19804030a9c00a0192de85dfb88dd7d0e26623a40d07496966db2201d48e6\" returns successfully" May 9 00:39:46.554831 containerd[1580]: time="2025-05-09T00:39:46.554745570Z" level=info msg="StartContainer for \"3a542c51edb2d9e52a8b2b726f17a94d171c86fac83266ce1bf32c10b4076cb3\" returns successfully" May 9 00:39:46.778949 kubelet[2329]: E0509 00:39:46.778911 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:46.781307 kubelet[2329]: E0509 00:39:46.781275 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:46.783072 kubelet[2329]: E0509 00:39:46.783046 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:47.053530 kubelet[2329]: I0509 00:39:47.053485 2329 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:39:47.557522 kubelet[2329]: E0509 00:39:47.557476 2329 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:39:47.773989 kubelet[2329]: I0509 00:39:47.773928 2329 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 00:39:47.791369 kubelet[2329]: E0509 00:39:47.791281 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:47.791803 kubelet[2329]: E0509 00:39:47.791778 2329 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:47.791982 kubelet[2329]: E0509 00:39:47.791953 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:48.005491 kubelet[2329]: E0509 00:39:48.005336 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.106201 kubelet[2329]: E0509 00:39:48.106130 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.207276 kubelet[2329]: E0509 00:39:48.207239 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.307417 kubelet[2329]: E0509 00:39:48.307374 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.408011 kubelet[2329]: E0509 00:39:48.407961 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.508558 kubelet[2329]: E0509 00:39:48.508510 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.609311 kubelet[2329]: E0509 00:39:48.609170 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.709837 kubelet[2329]: E0509 00:39:48.709784 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.789296 kubelet[2329]: E0509 00:39:48.789259 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:48.809926 kubelet[2329]: E0509 00:39:48.809886 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:48.910578 kubelet[2329]: E0509 00:39:48.910462 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:49.011019 kubelet[2329]: E0509 00:39:49.010957 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:49.111848 kubelet[2329]: E0509 00:39:49.111775 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:49.212648 kubelet[2329]: E0509 00:39:49.212481 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:49.313593 kubelet[2329]: E0509 00:39:49.313534 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:49.414093 kubelet[2329]: E0509 00:39:49.414033 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:49.514844 kubelet[2329]: E0509 00:39:49.514656 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:39:49.730787 kubelet[2329]: I0509 00:39:49.730717 2329 apiserver.go:52] "Watching apiserver" May 9 00:39:49.740653 kubelet[2329]: I0509 
00:39:49.740618 2329 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:39:50.377840 kubelet[2329]: E0509 00:39:50.377809 2329 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:50.404417 systemd[1]: Reloading requested from client PID 2610 ('systemctl') (unit session-5.scope)... May 9 00:39:50.404434 systemd[1]: Reloading... May 9 00:39:50.471816 zram_generator::config[2650]: No configuration found. May 9 00:39:50.595625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:39:50.679102 systemd[1]: Reloading finished in 274 ms. May 9 00:39:50.709916 kubelet[2329]: I0509 00:39:50.709844 2329 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:39:50.709935 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:39:50.729918 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:39:50.730323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:39:50.740099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:39:50.905715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:39:50.911810 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:39:50.957596 kubelet[2704]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:39:50.957596 kubelet[2704]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:39:50.957596 kubelet[2704]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:39:50.957596 kubelet[2704]: I0509 00:39:50.957548 2704 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:39:50.962151 kubelet[2704]: I0509 00:39:50.962124 2704 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:39:50.962151 kubelet[2704]: I0509 00:39:50.962142 2704 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:39:50.962319 kubelet[2704]: I0509 00:39:50.962299 2704 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:39:50.963339 kubelet[2704]: I0509 00:39:50.963317 2704 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:39:50.964335 kubelet[2704]: I0509 00:39:50.964257 2704 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:39:50.972487 kubelet[2704]: I0509 00:39:50.972447 2704 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:39:50.973001 kubelet[2704]: I0509 00:39:50.972966 2704 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:39:50.973154 kubelet[2704]: I0509 00:39:50.972993 2704 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:39:50.973249 kubelet[2704]: I0509 00:39:50.973161 2704 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:39:50.973249 kubelet[2704]: I0509 00:39:50.973170 2704 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:39:50.973249 kubelet[2704]: I0509 00:39:50.973210 2704 state_mem.go:36] "Initialized new in-memory state store" May 9 00:39:50.973337 kubelet[2704]: I0509 00:39:50.973294 2704 kubelet.go:400] "Attempting to sync node with API server" May 9 00:39:50.973337 kubelet[2704]: I0509 00:39:50.973314 2704 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:39:50.973337 kubelet[2704]: I0509 00:39:50.973334 2704 kubelet.go:312] "Adding apiserver pod source" May 9 00:39:50.973431 kubelet[2704]: I0509 00:39:50.973353 2704 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:39:50.974075 kubelet[2704]: I0509 00:39:50.974017 2704 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:39:50.974247 kubelet[2704]: I0509 00:39:50.974208 2704 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:39:50.974950 kubelet[2704]: I0509 00:39:50.974920 2704 server.go:1264] "Started kubelet" May 9 00:39:50.976140 kubelet[2704]: I0509 00:39:50.976116 2704 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:39:50.982779 kubelet[2704]: I0509 00:39:50.978220 2704 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:39:50.982779 kubelet[2704]: I0509 00:39:50.979056 2704 server.go:455] "Adding debug handlers to kubelet server" 
May 9 00:39:50.982779 kubelet[2704]: I0509 00:39:50.980460 2704 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:39:50.982779 kubelet[2704]: I0509 00:39:50.980663 2704 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:39:50.982779 kubelet[2704]: I0509 00:39:50.981712 2704 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:39:50.982779 kubelet[2704]: I0509 00:39:50.981834 2704 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:39:50.982779 kubelet[2704]: I0509 00:39:50.981973 2704 reconciler.go:26] "Reconciler: start to sync state" May 9 00:39:50.988260 kubelet[2704]: I0509 00:39:50.988225 2704 factory.go:221] Registration of the containerd container factory successfully May 9 00:39:50.988260 kubelet[2704]: I0509 00:39:50.988246 2704 factory.go:221] Registration of the systemd container factory successfully May 9 00:39:50.988398 kubelet[2704]: I0509 00:39:50.988307 2704 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:39:50.991404 kubelet[2704]: E0509 00:39:50.990298 2704 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:39:50.992301 kubelet[2704]: I0509 00:39:50.991999 2704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:39:50.996955 kubelet[2704]: I0509 00:39:50.996492 2704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:39:50.997055 kubelet[2704]: I0509 00:39:50.997042 2704 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:39:50.997368 kubelet[2704]: I0509 00:39:50.997144 2704 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:39:50.997368 kubelet[2704]: E0509 00:39:50.997196 2704 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:39:51.035616 kubelet[2704]: I0509 00:39:51.035573 2704 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:39:51.035616 kubelet[2704]: I0509 00:39:51.035589 2704 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:39:51.035616 kubelet[2704]: I0509 00:39:51.035620 2704 state_mem.go:36] "Initialized new in-memory state store" May 9 00:39:51.035920 kubelet[2704]: I0509 00:39:51.035791 2704 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:39:51.035920 kubelet[2704]: I0509 00:39:51.035801 2704 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:39:51.035920 kubelet[2704]: I0509 00:39:51.035820 2704 policy_none.go:49] "None policy: Start" May 9 00:39:51.036423 kubelet[2704]: I0509 00:39:51.036403 2704 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:39:51.036423 kubelet[2704]: I0509 00:39:51.036425 2704 state_mem.go:35] "Initializing new in-memory state store" May 9 00:39:51.036554 kubelet[2704]: I0509 00:39:51.036538 2704 state_mem.go:75] "Updated machine memory state" May 9 00:39:51.038050 kubelet[2704]: I0509 00:39:51.038032 2704 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:39:51.038459 kubelet[2704]: I0509 00:39:51.038206 2704 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:39:51.038459 kubelet[2704]: I0509 00:39:51.038291 2704 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:39:51.086268 kubelet[2704]: I0509 00:39:51.086233 2704 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:39:51.090816 kubelet[2704]: I0509 00:39:51.090785 2704 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 9 00:39:51.090961 kubelet[2704]: I0509 00:39:51.090860 2704 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 00:39:51.097662 kubelet[2704]: I0509 00:39:51.097622 2704 topology_manager.go:215] "Topology Admit Handler" podUID="354e06284f526a2df40b08d3293424b9" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 00:39:51.097861 kubelet[2704]: I0509 00:39:51.097696 2704 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 00:39:51.097861 kubelet[2704]: I0509 00:39:51.097785 2704 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 00:39:51.103911 kubelet[2704]: E0509 00:39:51.103872 2704 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 9 00:39:51.182368 kubelet[2704]: I0509 00:39:51.182275 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/354e06284f526a2df40b08d3293424b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"354e06284f526a2df40b08d3293424b9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:39:51.182368 kubelet[2704]: I0509 00:39:51.182321 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/354e06284f526a2df40b08d3293424b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"354e06284f526a2df40b08d3293424b9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:39:51.182368 kubelet[2704]: I0509 00:39:51.182358 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/354e06284f526a2df40b08d3293424b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"354e06284f526a2df40b08d3293424b9\") " pod="kube-system/kube-apiserver-localhost" May 9 00:39:51.182368 kubelet[2704]: I0509 00:39:51.182380 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:51.182597 kubelet[2704]: I0509 00:39:51.182400 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 
00:39:51.182597 kubelet[2704]: I0509 00:39:51.182422 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:51.182597 kubelet[2704]: I0509 00:39:51.182440 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:51.182597 kubelet[2704]: I0509 00:39:51.182459 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:39:51.182597 kubelet[2704]: I0509 00:39:51.182476 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 00:39:51.403725 kubelet[2704]: E0509 00:39:51.403631 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:51.403886 kubelet[2704]: E0509 00:39:51.403829 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:51.404382 kubelet[2704]: E0509 00:39:51.404365 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:51.974568 kubelet[2704]: I0509 00:39:51.974511 2704 apiserver.go:52] "Watching apiserver" May 9 00:39:51.982854 kubelet[2704]: I0509 00:39:51.982826 2704 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:39:52.007683 kubelet[2704]: E0509 00:39:52.007016 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:52.007683 kubelet[2704]: E0509 00:39:52.007019 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:52.225812 kubelet[2704]: I0509 00:39:52.225660 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.225636023 podStartE2EDuration="1.225636023s" podCreationTimestamp="2025-05-09 00:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:39:52.193805944 +0000 UTC m=+1.277304041" watchObservedRunningTime="2025-05-09 00:39:52.225636023 +0000 UTC 
m=+1.309134109" May 9 00:39:52.261616 kubelet[2704]: E0509 00:39:52.258001 2704 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:39:52.265848 kubelet[2704]: E0509 00:39:52.265816 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:52.305628 kubelet[2704]: I0509 00:39:52.305562 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.305520654 podStartE2EDuration="1.305520654s" podCreationTimestamp="2025-05-09 00:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:39:52.304247408 +0000 UTC m=+1.387745504" watchObservedRunningTime="2025-05-09 00:39:52.305520654 +0000 UTC m=+1.389018750" May 9 00:39:52.344345 kubelet[2704]: I0509 00:39:52.344274 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.344254482 podStartE2EDuration="2.344254482s" podCreationTimestamp="2025-05-09 00:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:39:52.344002865 +0000 UTC m=+1.427501031" watchObservedRunningTime="2025-05-09 00:39:52.344254482 +0000 UTC m=+1.427752578" May 9 00:39:52.789564 sudo[1727]: pam_unix(sudo:session): session closed for user root May 9 00:39:52.791660 sshd[1720]: pam_unix(sshd:session): session closed for user core May 9 00:39:52.795363 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:49980.service: Deactivated successfully. May 9 00:39:52.797260 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit. May 9 00:39:52.797425 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:39:52.798519 systemd-logind[1554]: Removed session 5. May 9 00:39:53.008263 kubelet[2704]: E0509 00:39:53.008231 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:53.008263 kubelet[2704]: E0509 00:39:53.008252 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:53.511723 kubelet[2704]: E0509 00:39:53.511669 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:39:57.862991 update_engine[1560]: I20250509 00:39:57.862915 1560 update_attempter.cc:509] Updating boot flags... 
May 9 00:39:57.887759 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2775) May 9 00:39:57.920780 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2776) May 9 00:39:57.955608 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2776) May 9 00:39:59.607364 kubelet[2704]: E0509 00:39:59.607322 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:00.018086 kubelet[2704]: E0509 00:40:00.017954 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:02.110961 kubelet[2704]: E0509 00:40:02.109376 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:03.523746 kubelet[2704]: E0509 00:40:03.523666 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:03.939242 kubelet[2704]: I0509 00:40:03.939180 2704 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:40:03.939860 containerd[1580]: time="2025-05-09T00:40:03.939803193Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:40:03.940373 kubelet[2704]: I0509 00:40:03.940160 2704 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:40:04.615317 kubelet[2704]: I0509 00:40:04.611604 2704 topology_manager.go:215] "Topology Admit Handler" podUID="b1d4b1b2-29ea-4f94-8d7d-399284feb2e0" podNamespace="kube-system" podName="kube-proxy-cx7gd" May 9 00:40:04.675147 kubelet[2704]: I0509 00:40:04.671800 2704 topology_manager.go:215] "Topology Admit Handler" podUID="cddad015-aee1-4760-b80a-2ce53e5fb06f" podNamespace="kube-flannel" podName="kube-flannel-ds-q5p7n" May 9 00:40:04.720877 kubelet[2704]: I0509 00:40:04.719918 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lsxc\" (UniqueName: \"kubernetes.io/projected/b1d4b1b2-29ea-4f94-8d7d-399284feb2e0-kube-api-access-7lsxc\") pod \"kube-proxy-cx7gd\" (UID: \"b1d4b1b2-29ea-4f94-8d7d-399284feb2e0\") " pod="kube-system/kube-proxy-cx7gd" May 9 00:40:04.720877 kubelet[2704]: I0509 00:40:04.720030 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1d4b1b2-29ea-4f94-8d7d-399284feb2e0-kube-proxy\") pod \"kube-proxy-cx7gd\" (UID: \"b1d4b1b2-29ea-4f94-8d7d-399284feb2e0\") " pod="kube-system/kube-proxy-cx7gd" May 9 00:40:04.720877 kubelet[2704]: I0509 00:40:04.720155 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1d4b1b2-29ea-4f94-8d7d-399284feb2e0-xtables-lock\") pod \"kube-proxy-cx7gd\" (UID: \"b1d4b1b2-29ea-4f94-8d7d-399284feb2e0\") " pod="kube-system/kube-proxy-cx7gd" May 9 00:40:04.720877 kubelet[2704]: I0509 00:40:04.720251 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1d4b1b2-29ea-4f94-8d7d-399284feb2e0-lib-modules\") pod \"kube-proxy-cx7gd\" (UID: \"b1d4b1b2-29ea-4f94-8d7d-399284feb2e0\") " pod="kube-system/kube-proxy-cx7gd" May 9 00:40:04.821809 kubelet[2704]: I0509 00:40:04.821421 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/cddad015-aee1-4760-b80a-2ce53e5fb06f-cni\") pod \"kube-flannel-ds-q5p7n\" (UID: \"cddad015-aee1-4760-b80a-2ce53e5fb06f\") " pod="kube-flannel/kube-flannel-ds-q5p7n" May 9 00:40:04.823646 kubelet[2704]: I0509 00:40:04.822430 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/cddad015-aee1-4760-b80a-2ce53e5fb06f-cni-plugin\") pod \"kube-flannel-ds-q5p7n\" (UID: \"cddad015-aee1-4760-b80a-2ce53e5fb06f\") " pod="kube-flannel/kube-flannel-ds-q5p7n" May 9 00:40:04.823646 kubelet[2704]: I0509 00:40:04.822468 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/cddad015-aee1-4760-b80a-2ce53e5fb06f-flannel-cfg\") pod \"kube-flannel-ds-q5p7n\" (UID: \"cddad015-aee1-4760-b80a-2ce53e5fb06f\") " pod="kube-flannel/kube-flannel-ds-q5p7n" May 9 00:40:04.823646 kubelet[2704]: I0509 00:40:04.822492 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cddad015-aee1-4760-b80a-2ce53e5fb06f-xtables-lock\") pod \"kube-flannel-ds-q5p7n\" (UID: \"cddad015-aee1-4760-b80a-2ce53e5fb06f\") " pod="kube-flannel/kube-flannel-ds-q5p7n" May 9 00:40:04.823646 kubelet[2704]: I0509 00:40:04.822518 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dl26\" (UniqueName: \"kubernetes.io/projected/cddad015-aee1-4760-b80a-2ce53e5fb06f-kube-api-access-6dl26\") pod \"kube-flannel-ds-q5p7n\" (UID: \"cddad015-aee1-4760-b80a-2ce53e5fb06f\") " pod="kube-flannel/kube-flannel-ds-q5p7n" May 9 00:40:04.823646 kubelet[2704]: I0509 00:40:04.822557 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cddad015-aee1-4760-b80a-2ce53e5fb06f-run\") pod \"kube-flannel-ds-q5p7n\" (UID: \"cddad015-aee1-4760-b80a-2ce53e5fb06f\") " pod="kube-flannel/kube-flannel-ds-q5p7n" May 9 00:40:04.921000 kubelet[2704]: E0509 00:40:04.919839 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:04.930893 containerd[1580]: time="2025-05-09T00:40:04.930778425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cx7gd,Uid:b1d4b1b2-29ea-4f94-8d7d-399284feb2e0,Namespace:kube-system,Attempt:0,}" May 9 00:40:04.987431 kubelet[2704]: E0509 00:40:04.985530 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:04.987583 containerd[1580]: time="2025-05-09T00:40:04.986286854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q5p7n,Uid:cddad015-aee1-4760-b80a-2ce53e5fb06f,Namespace:kube-flannel,Attempt:0,}" May 9 00:40:05.064659 containerd[1580]: 
time="2025-05-09T00:40:05.064316031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:05.064659 containerd[1580]: time="2025-05-09T00:40:05.064399103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:05.064659 containerd[1580]: time="2025-05-09T00:40:05.064421547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:05.065037 containerd[1580]: time="2025-05-09T00:40:05.064550279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:05.092541 containerd[1580]: time="2025-05-09T00:40:05.090259121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:05.092541 containerd[1580]: time="2025-05-09T00:40:05.090346011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:05.092541 containerd[1580]: time="2025-05-09T00:40:05.090370469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:05.092541 containerd[1580]: time="2025-05-09T00:40:05.090532164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:05.156833 containerd[1580]: time="2025-05-09T00:40:05.156656539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cx7gd,Uid:b1d4b1b2-29ea-4f94-8d7d-399284feb2e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf4f24ca33470d5f581adb73663f5243c28f01a008ebf756023c0acdfea03cf2\"" May 9 00:40:05.158672 kubelet[2704]: E0509 00:40:05.158086 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:05.167046 containerd[1580]: time="2025-05-09T00:40:05.166997442Z" level=info msg="CreateContainer within sandbox \"bf4f24ca33470d5f581adb73663f5243c28f01a008ebf756023c0acdfea03cf2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:40:05.209314 containerd[1580]: time="2025-05-09T00:40:05.207555012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q5p7n,Uid:cddad015-aee1-4760-b80a-2ce53e5fb06f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"70bcbe94a95a17b0c52eb8800e94d19fe545ca30778e5173dea3dd4c94993abf\"" May 9 00:40:05.209433 kubelet[2704]: E0509 00:40:05.208711 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:05.220271 containerd[1580]: time="2025-05-09T00:40:05.220132151Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 9 00:40:05.225947 containerd[1580]: time="2025-05-09T00:40:05.225873243Z" level=info msg="CreateContainer within sandbox \"bf4f24ca33470d5f581adb73663f5243c28f01a008ebf756023c0acdfea03cf2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"45bb8373fba540a681c88a65ef1629ffeaf86d8dd6e736aad8b7074683eb5802\"" May 9 00:40:05.226769 containerd[1580]: time="2025-05-09T00:40:05.226698724Z" 
level=info msg="StartContainer for \"45bb8373fba540a681c88a65ef1629ffeaf86d8dd6e736aad8b7074683eb5802\"" May 9 00:40:05.336778 containerd[1580]: time="2025-05-09T00:40:05.336693136Z" level=info msg="StartContainer for \"45bb8373fba540a681c88a65ef1629ffeaf86d8dd6e736aad8b7074683eb5802\" returns successfully" May 9 00:40:06.080711 kubelet[2704]: E0509 00:40:06.080440 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:07.433586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581881225.mount: Deactivated successfully. May 9 00:40:07.543983 containerd[1580]: time="2025-05-09T00:40:07.543831823Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:07.545260 containerd[1580]: time="2025-05-09T00:40:07.545133163Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" May 9 00:40:07.554683 containerd[1580]: time="2025-05-09T00:40:07.550984984Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:07.560172 containerd[1580]: time="2025-05-09T00:40:07.560078186Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:07.563367 containerd[1580]: time="2025-05-09T00:40:07.561436958Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.34125546s" May 9 00:40:07.563367 containerd[1580]: time="2025-05-09T00:40:07.561493186Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 9 00:40:07.581527 containerd[1580]: time="2025-05-09T00:40:07.580071583Z" level=info msg="CreateContainer within sandbox \"70bcbe94a95a17b0c52eb8800e94d19fe545ca30778e5173dea3dd4c94993abf\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 9 00:40:07.648487 containerd[1580]: time="2025-05-09T00:40:07.644868199Z" level=info msg="CreateContainer within sandbox \"70bcbe94a95a17b0c52eb8800e94d19fe545ca30778e5173dea3dd4c94993abf\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"373e11c576a33ae1d1fab602cae533e9f1fea6692ebf15af363a58a5c359286c\"" May 9 00:40:07.648487 containerd[1580]: time="2025-05-09T00:40:07.647001005Z" level=info msg="StartContainer for \"373e11c576a33ae1d1fab602cae533e9f1fea6692ebf15af363a58a5c359286c\"" May 9 00:40:07.787907 containerd[1580]: time="2025-05-09T00:40:07.786886992Z" level=info msg="StartContainer for \"373e11c576a33ae1d1fab602cae533e9f1fea6692ebf15af363a58a5c359286c\" returns successfully" May 9 00:40:07.934292 containerd[1580]: time="2025-05-09T00:40:07.930964786Z" level=info msg="shim disconnected" id=373e11c576a33ae1d1fab602cae533e9f1fea6692ebf15af363a58a5c359286c namespace=k8s.io May 9 
00:40:07.934292 containerd[1580]: time="2025-05-09T00:40:07.933169411Z" level=warning msg="cleaning up after shim disconnected" id=373e11c576a33ae1d1fab602cae533e9f1fea6692ebf15af363a58a5c359286c namespace=k8s.io May 9 00:40:07.934292 containerd[1580]: time="2025-05-09T00:40:07.933187847Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:40:08.124904 kubelet[2704]: E0509 00:40:08.118698 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:08.128299 containerd[1580]: time="2025-05-09T00:40:08.126700573Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 9 00:40:08.155878 kubelet[2704]: I0509 00:40:08.155225 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cx7gd" podStartSLOduration=4.155192495 podStartE2EDuration="4.155192495s" podCreationTimestamp="2025-05-09 00:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:40:06.114877961 +0000 UTC m=+15.198376087" watchObservedRunningTime="2025-05-09 00:40:08.155192495 +0000 UTC m=+17.238690591" May 9 00:40:10.633191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3400643917.mount: Deactivated successfully. May 9 00:40:12.356370 containerd[1580]: time="2025-05-09T00:40:12.354695133Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:12.358806 containerd[1580]: time="2025-05-09T00:40:12.357886576Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" May 9 00:40:12.360626 containerd[1580]: time="2025-05-09T00:40:12.360486930Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:12.368472 containerd[1580]: time="2025-05-09T00:40:12.365552190Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:40:12.372448 containerd[1580]: time="2025-05-09T00:40:12.369303179Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.242486322s" May 9 00:40:12.372448 containerd[1580]: time="2025-05-09T00:40:12.369412260Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 9 00:40:12.372448 containerd[1580]: time="2025-05-09T00:40:12.372439086Z" level=info msg="CreateContainer within sandbox \"70bcbe94a95a17b0c52eb8800e94d19fe545ca30778e5173dea3dd4c94993abf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 00:40:12.427960 containerd[1580]: time="2025-05-09T00:40:12.427847764Z" level=info msg="CreateContainer within sandbox \"70bcbe94a95a17b0c52eb8800e94d19fe545ca30778e5173dea3dd4c94993abf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"2404f8cca0263a047b7635b6f05c7c9015d9ec1133dbe27abb6868106bfbdd2d\"" May 9 00:40:12.432408 containerd[1580]: time="2025-05-09T00:40:12.428779386Z" level=info msg="StartContainer for \"2404f8cca0263a047b7635b6f05c7c9015d9ec1133dbe27abb6868106bfbdd2d\"" May 9 00:40:12.633778 containerd[1580]: time="2025-05-09T00:40:12.633379391Z" level=info msg="StartContainer for \"2404f8cca0263a047b7635b6f05c7c9015d9ec1133dbe27abb6868106bfbdd2d\" returns successfully" May 9 00:40:12.650140 kubelet[2704]: I0509 00:40:12.648648 2704 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 00:40:12.674823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2404f8cca0263a047b7635b6f05c7c9015d9ec1133dbe27abb6868106bfbdd2d-rootfs.mount: Deactivated successfully. May 9 00:40:12.735207 kubelet[2704]: I0509 00:40:12.734826 2704 topology_manager.go:215] "Topology Admit Handler" podUID="94ae4d1f-5778-40e6-ad4b-773b6635d0f9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s66x7" May 9 00:40:12.759890 kubelet[2704]: I0509 00:40:12.756652 2704 topology_manager.go:215] "Topology Admit Handler" podUID="470f7732-f8da-4e9e-a593-ec7a7acca53d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n7wx4" May 9 00:40:12.817048 kubelet[2704]: I0509 00:40:12.816804 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94ae4d1f-5778-40e6-ad4b-773b6635d0f9-config-volume\") pod \"coredns-7db6d8ff4d-s66x7\" (UID: \"94ae4d1f-5778-40e6-ad4b-773b6635d0f9\") " pod="kube-system/coredns-7db6d8ff4d-s66x7" May 9 00:40:12.817263 kubelet[2704]: I0509 00:40:12.817140 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq64t\" (UniqueName: \"kubernetes.io/projected/470f7732-f8da-4e9e-a593-ec7a7acca53d-kube-api-access-mq64t\") pod \"coredns-7db6d8ff4d-n7wx4\" (UID: \"470f7732-f8da-4e9e-a593-ec7a7acca53d\") " pod="kube-system/coredns-7db6d8ff4d-n7wx4" May 9 00:40:12.817263 kubelet[2704]: I0509 00:40:12.817177 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/470f7732-f8da-4e9e-a593-ec7a7acca53d-config-volume\") pod \"coredns-7db6d8ff4d-n7wx4\" (UID: \"470f7732-f8da-4e9e-a593-ec7a7acca53d\") " pod="kube-system/coredns-7db6d8ff4d-n7wx4" May 9 00:40:12.817263 kubelet[2704]: I0509 00:40:12.817204 2704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvc89\" (UniqueName: \"kubernetes.io/projected/94ae4d1f-5778-40e6-ad4b-773b6635d0f9-kube-api-access-pvc89\") pod \"coredns-7db6d8ff4d-s66x7\" (UID: \"94ae4d1f-5778-40e6-ad4b-773b6635d0f9\") " pod="kube-system/coredns-7db6d8ff4d-s66x7" May 9 00:40:13.111711 containerd[1580]: time="2025-05-09T00:40:13.110235714Z" level=info msg="shim disconnected" id=2404f8cca0263a047b7635b6f05c7c9015d9ec1133dbe27abb6868106bfbdd2d namespace=k8s.io May 9 00:40:13.111711 containerd[1580]: time="2025-05-09T00:40:13.110331017Z" level=warning msg="cleaning up after shim disconnected" id=2404f8cca0263a047b7635b6f05c7c9015d9ec1133dbe27abb6868106bfbdd2d namespace=k8s.io May 9 00:40:13.111711 containerd[1580]: time="2025-05-09T00:40:13.110344053Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:40:13.148443 containerd[1580]: time="2025-05-09T00:40:13.148313437Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:40:13Z\" 
level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:40:13.148891 kubelet[2704]: E0509 00:40:13.148844 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:13.399034 kubelet[2704]: E0509 00:40:13.398616 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:13.405490 kubelet[2704]: E0509 00:40:13.400651 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:13.406433 containerd[1580]: time="2025-05-09T00:40:13.405983759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n7wx4,Uid:470f7732-f8da-4e9e-a593-ec7a7acca53d,Namespace:kube-system,Attempt:0,}" May 9 00:40:13.419582 containerd[1580]: time="2025-05-09T00:40:13.408040381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s66x7,Uid:94ae4d1f-5778-40e6-ad4b-773b6635d0f9,Namespace:kube-system,Attempt:0,}" May 9 00:40:13.617047 systemd[1]: run-netns-cni\x2d213ad024\x2d8c45\x2d7f1f\x2d70fc\x2d2957498c5dfe.mount: Deactivated successfully. May 9 00:40:13.628013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c60c43a49968f6b541823720e35f880e19654bebdd42a204658d827352482332-shm.mount: Deactivated successfully. May 9 00:40:13.669972 containerd[1580]: time="2025-05-09T00:40:13.667826538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s66x7,Uid:94ae4d1f-5778-40e6-ad4b-773b6635d0f9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd0000ec1db656388e7b51b0221db544a923d12cb6b756a5c89cd8d017cf4599\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:40:13.670145 kubelet[2704]: E0509 00:40:13.668194 2704 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0000ec1db656388e7b51b0221db544a923d12cb6b756a5c89cd8d017cf4599\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:40:13.670145 kubelet[2704]: E0509 00:40:13.668304 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0000ec1db656388e7b51b0221db544a923d12cb6b756a5c89cd8d017cf4599\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-s66x7" May 9 00:40:13.670145 kubelet[2704]: E0509 00:40:13.668334 2704 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd0000ec1db656388e7b51b0221db544a923d12cb6b756a5c89cd8d017cf4599\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-s66x7" May 9 00:40:13.670145 kubelet[2704]: E0509 00:40:13.668397 2704 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s66x7_kube-system(94ae4d1f-5778-40e6-ad4b-773b6635d0f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s66x7_kube-system(94ae4d1f-5778-40e6-ad4b-773b6635d0f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd0000ec1db656388e7b51b0221db544a923d12cb6b756a5c89cd8d017cf4599\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-s66x7" podUID="94ae4d1f-5778-40e6-ad4b-773b6635d0f9" May 9 00:40:13.675929 containerd[1580]: time="2025-05-09T00:40:13.675808207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n7wx4,Uid:470f7732-f8da-4e9e-a593-ec7a7acca53d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c60c43a49968f6b541823720e35f880e19654bebdd42a204658d827352482332\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:40:13.676805 kubelet[2704]: E0509 00:40:13.676190 2704 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60c43a49968f6b541823720e35f880e19654bebdd42a204658d827352482332\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 9 00:40:13.676805 kubelet[2704]: E0509 00:40:13.676276 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60c43a49968f6b541823720e35f880e19654bebdd42a204658d827352482332\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-n7wx4" May 9 00:40:13.676805 kubelet[2704]: E0509 00:40:13.676300 2704 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60c43a49968f6b541823720e35f880e19654bebdd42a204658d827352482332\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-n7wx4" May 9 00:40:13.676805 kubelet[2704]: E0509 00:40:13.676359 2704 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n7wx4_kube-system(470f7732-f8da-4e9e-a593-ec7a7acca53d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n7wx4_kube-system(470f7732-f8da-4e9e-a593-ec7a7acca53d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c60c43a49968f6b541823720e35f880e19654bebdd42a204658d827352482332\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-n7wx4" podUID="470f7732-f8da-4e9e-a593-ec7a7acca53d" May 9 00:40:14.156495 kubelet[2704]: E0509 00:40:14.154475 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:14.168178 containerd[1580]: time="2025-05-09T00:40:14.164985379Z" level=info msg="CreateContainer within sandbox 
\"70bcbe94a95a17b0c52eb8800e94d19fe545ca30778e5173dea3dd4c94993abf\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 9 00:40:14.197686 containerd[1580]: time="2025-05-09T00:40:14.197598988Z" level=info msg="CreateContainer within sandbox \"70bcbe94a95a17b0c52eb8800e94d19fe545ca30778e5173dea3dd4c94993abf\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b19b8657a5d2db70d1a2982faebb261904eccc1161c7adae28cc60ea809b5264\"" May 9 00:40:14.200445 containerd[1580]: time="2025-05-09T00:40:14.198501921Z" level=info msg="StartContainer for \"b19b8657a5d2db70d1a2982faebb261904eccc1161c7adae28cc60ea809b5264\"" May 9 00:40:14.333499 containerd[1580]: time="2025-05-09T00:40:14.332958167Z" level=info msg="StartContainer for \"b19b8657a5d2db70d1a2982faebb261904eccc1161c7adae28cc60ea809b5264\" returns successfully" May 9 00:40:14.421135 systemd[1]: run-netns-cni\x2d63e73801\x2de825\x2dba17\x2d47e0\x2d6c98c7c0a77a.mount: Deactivated successfully. May 9 00:40:14.421383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd0000ec1db656388e7b51b0221db544a923d12cb6b756a5c89cd8d017cf4599-shm.mount: Deactivated successfully. May 9 00:40:15.166974 kubelet[2704]: E0509 00:40:15.164877 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:15.243568 kubelet[2704]: I0509 00:40:15.243207 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-q5p7n" podStartSLOduration=4.09070497 podStartE2EDuration="11.243181445s" podCreationTimestamp="2025-05-09 00:40:04 +0000 UTC" firstStartedPulling="2025-05-09 00:40:05.217814266 +0000 UTC m=+14.301312362" lastFinishedPulling="2025-05-09 00:40:12.370290741 +0000 UTC m=+21.453788837" observedRunningTime="2025-05-09 00:40:15.24276697 +0000 UTC m=+24.326265096" watchObservedRunningTime="2025-05-09 00:40:15.243181445 +0000 UTC m=+24.326679551" May 9 00:40:15.500973 systemd-networkd[1249]: flannel.1: Link UP May 9 00:40:15.500984 systemd-networkd[1249]: flannel.1: Gained carrier May 9 00:40:16.169302 kubelet[2704]: E0509 00:40:16.168853 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:16.853787 systemd-networkd[1249]: flannel.1: Gained IPv6LL May 9 00:40:26.007963 kubelet[2704]: E0509 00:40:26.004362 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:26.011116 containerd[1580]: time="2025-05-09T00:40:26.005982044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n7wx4,Uid:470f7732-f8da-4e9e-a593-ec7a7acca53d,Namespace:kube-system,Attempt:0,}" May 9 00:40:26.100004 systemd-networkd[1249]: cni0: Link UP May 9 00:40:26.100019 systemd-networkd[1249]: cni0: Gained carrier May 9 00:40:26.109280 systemd-networkd[1249]: cni0: Lost carrier May 9 00:40:26.165453 kernel: cni0: port 1(veth31c1f8c7) entered blocking state May 9 00:40:26.165636 kernel: cni0: port 1(veth31c1f8c7) entered disabled state May 9 00:40:26.165725 kernel: veth31c1f8c7: entered allmulticast mode May 9 00:40:26.165922 kernel: veth31c1f8c7: entered promiscuous mode May 9 00:40:26.180066 kernel: cni0: port 1(veth31c1f8c7) entered blocking state May 9 00:40:26.180117 kernel: cni0: port 
1(veth31c1f8c7) entered forwarding state May 9 00:40:26.180258 kernel: cni0: port 1(veth31c1f8c7) entered disabled state May 9 00:40:26.165178 systemd-networkd[1249]: veth31c1f8c7: Link UP May 9 00:40:26.197134 kernel: cni0: port 1(veth31c1f8c7) entered blocking state May 9 00:40:26.197266 kernel: cni0: port 1(veth31c1f8c7) entered forwarding state May 9 00:40:26.197956 systemd-networkd[1249]: veth31c1f8c7: Gained carrier May 9 00:40:26.198559 systemd-networkd[1249]: cni0: Gained carrier May 9 00:40:26.208409 containerd[1580]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} May 9 00:40:26.208409 containerd[1580]: delegateAdd: netconf sent to delegate plugin: May 9 00:40:26.312569 containerd[1580]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-09T00:40:26.298383533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:26.312569 containerd[1580]: time="2025-05-09T00:40:26.298492203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:26.312569 containerd[1580]: time="2025-05-09T00:40:26.298520257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:26.329011 containerd[1580]: time="2025-05-09T00:40:26.324624989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:26.410298 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:40:26.459873 containerd[1580]: time="2025-05-09T00:40:26.459617121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n7wx4,Uid:470f7732-f8da-4e9e-a593-ec7a7acca53d,Namespace:kube-system,Attempt:0,} returns sandbox id \"33a0fae112315f38c453e320801de6de002b7a28734a35e608fcd04355d2b1a2\"" May 9 00:40:26.463809 kubelet[2704]: E0509 00:40:26.461047 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:26.475322 containerd[1580]: time="2025-05-09T00:40:26.475232280Z" level=info msg="CreateContainer within sandbox \"33a0fae112315f38c453e320801de6de002b7a28734a35e608fcd04355d2b1a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:40:26.527409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655526787.mount: Deactivated successfully. 
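The RunPodSandbox failures for both CoreDNS pods at 00:40:13 ("loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory") happen because the flannel CNI plugin reads /run/flannel/subnet.env, and in the stock flannel manifests that file is only written by the kube-flannel container once flanneld starts and obtains the node's subnet lease; the install-cni-plugin and install-cni containers that exit immediately above only copy the CNI binary and drop the CNI config from the flannel-cfg ConfigMap. Once flannel.1 comes up at 00:40:15 the file exists, and the retried sandbox creation above succeeds. As a sketch, /run/flannel/subnet.env on this node would typically look like the following, with values inferred from the bridge delegate config logged above rather than read from the host:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true

FLANNEL_IPMASQ=true is an assumption here: the delegate config sets ipMasq to false on the bridge, which is the behaviour flannel uses when it performs masquerading itself.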
May 9 00:40:26.538364 containerd[1580]: time="2025-05-09T00:40:26.538265752Z" level=info msg="CreateContainer within sandbox \"33a0fae112315f38c453e320801de6de002b7a28734a35e608fcd04355d2b1a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"519bdc6b60e4dd98f96dc952bc0f75aeb23ab1b7ee36efb2db6eb756d332a78e\"" May 9 00:40:26.542343 containerd[1580]: time="2025-05-09T00:40:26.539245786Z" level=info msg="StartContainer for \"519bdc6b60e4dd98f96dc952bc0f75aeb23ab1b7ee36efb2db6eb756d332a78e\"" May 9 00:40:26.687381 containerd[1580]: time="2025-05-09T00:40:26.687181324Z" level=info msg="StartContainer for \"519bdc6b60e4dd98f96dc952bc0f75aeb23ab1b7ee36efb2db6eb756d332a78e\" returns successfully" May 9 00:40:27.257579 kubelet[2704]: E0509 00:40:27.257492 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:27.368127 kubelet[2704]: I0509 00:40:27.368060 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n7wx4" podStartSLOduration=22.368034975 podStartE2EDuration="22.368034975s" podCreationTimestamp="2025-05-09 00:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:40:27.338138681 +0000 UTC m=+36.421636787" watchObservedRunningTime="2025-05-09 00:40:27.368034975 +0000 UTC m=+36.451533071" May 9 00:40:27.411002 systemd-networkd[1249]: cni0: Gained IPv6LL May 9 00:40:28.241947 systemd-networkd[1249]: veth31c1f8c7: Gained IPv6LL May 9 00:40:28.252941 kubelet[2704]: E0509 00:40:28.252879 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:28.998375 kubelet[2704]: E0509 00:40:28.998322 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:28.999811 containerd[1580]: time="2025-05-09T00:40:28.999128556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s66x7,Uid:94ae4d1f-5778-40e6-ad4b-773b6635d0f9,Namespace:kube-system,Attempt:0,}" May 9 00:40:29.028135 systemd-networkd[1249]: veth826eed65: Link UP May 9 00:40:29.030204 kernel: cni0: port 2(veth826eed65) entered blocking state May 9 00:40:29.030333 kernel: cni0: port 2(veth826eed65) entered disabled state May 9 00:40:29.030362 kernel: veth826eed65: entered allmulticast mode May 9 00:40:29.031779 kernel: veth826eed65: entered promiscuous mode May 9 00:40:29.038965 kernel: cni0: port 2(veth826eed65) entered blocking state May 9 00:40:29.039049 kernel: cni0: port 2(veth826eed65) entered forwarding state May 9 00:40:29.039204 systemd-networkd[1249]: veth826eed65: Gained carrier May 9 00:40:29.088430 containerd[1580]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} May 9 00:40:29.088430 containerd[1580]: 
delegateAdd: netconf sent to delegate plugin: May 9 00:40:29.109819 containerd[1580]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-09T00:40:29.109578179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:40:29.109819 containerd[1580]: time="2025-05-09T00:40:29.109645759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:40:29.109819 containerd[1580]: time="2025-05-09T00:40:29.109657702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:29.109819 containerd[1580]: time="2025-05-09T00:40:29.109774928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:40:29.141875 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 9 00:40:29.169866 containerd[1580]: time="2025-05-09T00:40:29.169807524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s66x7,Uid:94ae4d1f-5778-40e6-ad4b-773b6635d0f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"75ec7b24d4bdb9adadd49eab98a6ccdfb699c4060f15b076b3a21577e2b6e34f\"" May 9 00:40:29.176443 kubelet[2704]: E0509 00:40:29.176408 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:29.199703 containerd[1580]: time="2025-05-09T00:40:29.199652963Z" level=info msg="CreateContainer within sandbox \"75ec7b24d4bdb9adadd49eab98a6ccdfb699c4060f15b076b3a21577e2b6e34f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 00:40:29.215678 containerd[1580]: time="2025-05-09T00:40:29.215626072Z" level=info msg="CreateContainer within sandbox \"75ec7b24d4bdb9adadd49eab98a6ccdfb699c4060f15b076b3a21577e2b6e34f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f5335d885bd9ccd6965ee767263558b154543f6018c94182626c02fae8cb7ed\"" May 9 00:40:29.218381 containerd[1580]: time="2025-05-09T00:40:29.218343101Z" level=info msg="StartContainer for \"1f5335d885bd9ccd6965ee767263558b154543f6018c94182626c02fae8cb7ed\"" May 9 00:40:29.286295 containerd[1580]: time="2025-05-09T00:40:29.286231114Z" level=info msg="StartContainer for \"1f5335d885bd9ccd6965ee767263558b154543f6018c94182626c02fae8cb7ed\" returns successfully" May 9 00:40:30.014745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779566380.mount: Deactivated successfully. 
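The netconf that flannel's delegateAdd hands to the bridge plugin is logged inline above for both CoreDNS sandboxes; it is reproduced here pretty-printed for readability, content unchanged:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "hairpinMode": true,
      "isGateway": true,
      "isDefaultGateway": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "192.168.0.0/24"}]],
        "routes": [{"dst": "192.168.0.0/17"}]
      }
    }

Each sandbox gets a veth pair whose host end (veth31c1f8c7 and veth826eed65 in the kernel messages above) is attached to the cni0 bridge; on the node this topology can be inspected with standard iproute2 commands such as "bridge link show" and "ip -d link show cni0" (illustrative inspection commands, not taken from the log).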
May 9 00:40:30.265621 kubelet[2704]: E0509 00:40:30.265502 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:30.590718 kubelet[2704]: I0509 00:40:30.590344 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s66x7" podStartSLOduration=25.590322477 podStartE2EDuration="25.590322477s" podCreationTimestamp="2025-05-09 00:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:40:30.590201934 +0000 UTC m=+39.673700030" watchObservedRunningTime="2025-05-09 00:40:30.590322477 +0000 UTC m=+39.673820573" May 9 00:40:30.608986 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:48530.service - OpenSSH per-connection server daemon (10.0.0.1:48530). May 9 00:40:30.693797 sshd[3621]: Accepted publickey for core from 10.0.0.1 port 48530 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:40:30.695430 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:30.699560 systemd-logind[1554]: New session 6 of user core. May 9 00:40:30.711089 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:40:30.738025 systemd-networkd[1249]: veth826eed65: Gained IPv6LL May 9 00:40:30.883655 sshd[3621]: pam_unix(sshd:session): session closed for user core May 9 00:40:30.887863 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:48530.service: Deactivated successfully. May 9 00:40:30.890386 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit. May 9 00:40:30.890445 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:40:30.891599 systemd-logind[1554]: Removed session 6. May 9 00:40:31.262940 kubelet[2704]: E0509 00:40:31.262402 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:32.263990 kubelet[2704]: E0509 00:40:32.263949 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:33.266166 kubelet[2704]: E0509 00:40:33.266125 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:40:35.894030 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:48532.service - OpenSSH per-connection server daemon (10.0.0.1:48532). May 9 00:40:35.929074 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 48532 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:40:35.930569 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:35.934235 systemd-logind[1554]: New session 7 of user core. May 9 00:40:35.943966 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:40:36.056027 sshd[3685]: pam_unix(sshd:session): session closed for user core May 9 00:40:36.059699 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:48532.service: Deactivated successfully. May 9 00:40:36.062072 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit. May 9 00:40:36.062157 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:40:36.063192 systemd-logind[1554]: Removed session 7. 
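The pod_startup_latency_tracker entries above decompose as follows: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For kube-flannel-ds-q5p7n, logged at 00:40:15, the numbers work out exactly:

    pull window:  00:40:12.370290741 - 00:40:05.217814266 = 7.152476475s
    SLO duration: 11.243181445s - 7.152476475s            = 4.090704970s  (reported as 4.09070497)

For kube-proxy-cx7gd and the two CoreDNS pods the pull timestamps are zero (0001-01-01), meaning no image pull was needed, so the SLO duration equals the end-to-end duration.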
May 9 00:40:41.066942 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:43310.service - OpenSSH per-connection server daemon (10.0.0.1:43310). May 9 00:40:41.100476 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 43310 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:40:41.102343 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:41.106621 systemd-logind[1554]: New session 8 of user core. May 9 00:40:41.115035 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 00:40:41.222952 sshd[3723]: pam_unix(sshd:session): session closed for user core May 9 00:40:41.232980 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:43320.service - OpenSSH per-connection server daemon (10.0.0.1:43320). May 9 00:40:41.233564 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:43310.service: Deactivated successfully. May 9 00:40:41.237589 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:40:41.238863 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit. May 9 00:40:41.240121 systemd-logind[1554]: Removed session 8. May 9 00:40:41.268315 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 43320 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:40:41.269900 sshd[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:41.273583 systemd-logind[1554]: New session 9 of user core. May 9 00:40:41.279979 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 00:40:41.447520 sshd[3736]: pam_unix(sshd:session): session closed for user core May 9 00:40:41.454013 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:43328.service - OpenSSH per-connection server daemon (10.0.0.1:43328). May 9 00:40:41.454521 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:43320.service: Deactivated successfully. May 9 00:40:41.457743 systemd[1]: session-9.scope: Deactivated successfully. May 9 00:40:41.458561 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit. May 9 00:40:41.459557 systemd-logind[1554]: Removed session 9. May 9 00:40:41.490097 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 43328 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:40:41.491767 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:41.495860 systemd-logind[1554]: New session 10 of user core. May 9 00:40:41.505067 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 00:40:41.618828 sshd[3749]: pam_unix(sshd:session): session closed for user core May 9 00:40:41.623156 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:43328.service: Deactivated successfully. May 9 00:40:41.626529 systemd[1]: session-10.scope: Deactivated successfully. May 9 00:40:41.627512 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit. May 9 00:40:41.628460 systemd-logind[1554]: Removed session 10. May 9 00:40:46.630000 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:59162.service - OpenSSH per-connection server daemon (10.0.0.1:59162). May 9 00:40:46.664628 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 59162 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:40:46.666284 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:46.670297 systemd-logind[1554]: New session 11 of user core. May 9 00:40:46.683046 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 9 00:40:46.782978 sshd[3788]: pam_unix(sshd:session): session closed for user core May 9 00:40:46.786718 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:59162.service: Deactivated successfully. May 9 00:40:46.789125 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit. May 9 00:40:46.789218 systemd[1]: session-11.scope: Deactivated successfully. May 9 00:40:46.790172 systemd-logind[1554]: Removed session 11. May 9 00:40:51.799028 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:59172.service - OpenSSH per-connection server daemon (10.0.0.1:59172). May 9 00:40:51.831616 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 59172 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:40:51.833640 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:51.838478 systemd-logind[1554]: New session 12 of user core. May 9 00:40:51.855148 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 00:40:51.963287 sshd[3826]: pam_unix(sshd:session): session closed for user core May 9 00:40:51.967499 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:59172.service: Deactivated successfully. May 9 00:40:51.969927 systemd[1]: session-12.scope: Deactivated successfully. May 9 00:40:51.970235 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit. May 9 00:40:51.971140 systemd-logind[1554]: Removed session 12. May 9 00:40:56.977957 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:34930.service - OpenSSH per-connection server daemon (10.0.0.1:34930). May 9 00:40:57.010465 sshd[3863]: Accepted publickey for core from 10.0.0.1 port 34930 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:40:57.011896 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:40:57.015843 systemd-logind[1554]: New session 13 of user core. May 9 00:40:57.030981 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 00:40:57.142816 sshd[3863]: pam_unix(sshd:session): session closed for user core May 9 00:40:57.146305 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:34930.service: Deactivated successfully. May 9 00:40:57.148427 systemd[1]: session-13.scope: Deactivated successfully. May 9 00:40:57.148944 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit. May 9 00:40:57.149869 systemd-logind[1554]: Removed session 13. May 9 00:41:02.163141 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:34944.service - OpenSSH per-connection server daemon (10.0.0.1:34944). May 9 00:41:02.199419 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 34944 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:02.201610 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:02.207106 systemd-logind[1554]: New session 14 of user core. May 9 00:41:02.219253 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 00:41:02.330015 sshd[3900]: pam_unix(sshd:session): session closed for user core May 9 00:41:02.338004 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:34946.service - OpenSSH per-connection server daemon (10.0.0.1:34946). May 9 00:41:02.338763 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:34944.service: Deactivated successfully. May 9 00:41:02.342196 systemd[1]: session-14.scope: Deactivated successfully. May 9 00:41:02.343348 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit. 
May 9 00:41:02.344411 systemd-logind[1554]: Removed session 14. May 9 00:41:02.372805 sshd[3913]: Accepted publickey for core from 10.0.0.1 port 34946 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:02.374495 sshd[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:02.378780 systemd-logind[1554]: New session 15 of user core. May 9 00:41:02.389304 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 00:41:02.578198 sshd[3913]: pam_unix(sshd:session): session closed for user core May 9 00:41:02.585954 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:34962.service - OpenSSH per-connection server daemon (10.0.0.1:34962). May 9 00:41:02.586431 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:34946.service: Deactivated successfully. May 9 00:41:02.590226 systemd[1]: session-15.scope: Deactivated successfully. May 9 00:41:02.591240 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit. May 9 00:41:02.592476 systemd-logind[1554]: Removed session 15. May 9 00:41:02.620569 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 34962 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:02.622045 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:02.626184 systemd-logind[1554]: New session 16 of user core. May 9 00:41:02.635043 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 00:41:03.997493 sshd[3927]: pam_unix(sshd:session): session closed for user core May 9 00:41:04.010418 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:34972.service - OpenSSH per-connection server daemon (10.0.0.1:34972). May 9 00:41:04.011077 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:34962.service: Deactivated successfully. May 9 00:41:04.015147 systemd[1]: session-16.scope: Deactivated successfully. May 9 00:41:04.017204 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit. May 9 00:41:04.018379 systemd-logind[1554]: Removed session 16. May 9 00:41:04.050463 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 34972 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:04.052065 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:04.056309 systemd-logind[1554]: New session 17 of user core. May 9 00:41:04.065057 systemd[1]: Started session-17.scope - Session 17 of User core. May 9 00:41:04.263370 sshd[3947]: pam_unix(sshd:session): session closed for user core May 9 00:41:04.274123 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:34986.service - OpenSSH per-connection server daemon (10.0.0.1:34986). May 9 00:41:04.274812 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:34972.service: Deactivated successfully. May 9 00:41:04.278256 systemd-logind[1554]: Session 17 logged out. Waiting for processes to exit. May 9 00:41:04.279431 systemd[1]: session-17.scope: Deactivated successfully. May 9 00:41:04.280683 systemd-logind[1554]: Removed session 17. May 9 00:41:04.307847 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 34986 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:04.309629 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:04.314014 systemd-logind[1554]: New session 18 of user core. May 9 00:41:04.328983 systemd[1]: Started session-18.scope - Session 18 of User core. 
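Each of these SSH logins is accepted with the same RSA public key for user core (uid 500) from 10.0.0.1 and is tracked by systemd-logind as its own session-N.scope unit; the overlapping sessions 14 through 18 are separate connections opened and torn down in quick succession rather than a single long-lived session. While a session is active it can be listed on the node with, for example:

    loginctl list-sessions
    systemctl status session-15.scope

(illustrative inspection commands, not taken from the log).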
May 9 00:41:04.438168 sshd[3961]: pam_unix(sshd:session): session closed for user core May 9 00:41:04.443357 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:34986.service: Deactivated successfully. May 9 00:41:04.446572 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit. May 9 00:41:04.446748 systemd[1]: session-18.scope: Deactivated successfully. May 9 00:41:04.447917 systemd-logind[1554]: Removed session 18. May 9 00:41:07.998657 kubelet[2704]: E0509 00:41:07.998571 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:09.453969 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:54100.service - OpenSSH per-connection server daemon (10.0.0.1:54100). May 9 00:41:09.487234 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 54100 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:09.488870 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:09.493399 systemd-logind[1554]: New session 19 of user core. May 9 00:41:09.514165 systemd[1]: Started session-19.scope - Session 19 of User core. May 9 00:41:09.619657 sshd[4002]: pam_unix(sshd:session): session closed for user core May 9 00:41:09.624531 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:54100.service: Deactivated successfully. May 9 00:41:09.627062 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit. May 9 00:41:09.627157 systemd[1]: session-19.scope: Deactivated successfully. May 9 00:41:09.628178 systemd-logind[1554]: Removed session 19. May 9 00:41:09.997962 kubelet[2704]: E0509 00:41:09.997915 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:14.635122 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:54108.service - OpenSSH per-connection server daemon (10.0.0.1:54108). May 9 00:41:14.670700 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 54108 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:14.672703 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:14.677042 systemd-logind[1554]: New session 20 of user core. May 9 00:41:14.687096 systemd[1]: Started session-20.scope - Session 20 of User core. May 9 00:41:14.800878 sshd[4041]: pam_unix(sshd:session): session closed for user core May 9 00:41:14.805517 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:54108.service: Deactivated successfully. May 9 00:41:14.807653 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit. May 9 00:41:14.807659 systemd[1]: session-20.scope: Deactivated successfully. May 9 00:41:14.809009 systemd-logind[1554]: Removed session 20. May 9 00:41:14.998092 kubelet[2704]: E0509 00:41:14.997946 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:16.998449 kubelet[2704]: E0509 00:41:16.998366 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:41:19.815941 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:60640.service - OpenSSH per-connection server daemon (10.0.0.1:60640). 
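The kubelet dns.go:153 error that repeats throughout this log is kubelet warning that the resolv.conf it reads for pod DNS contains more than the three nameservers the libc resolver supports, so it applies only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and drops the rest; the omitted entries are never shown in the log. A sketch of the condition being warned about, where the fourth entry is a placeholder and not a value taken from this host:

    # sketch of the resolv.conf kubelet is reading
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    # placeholder fourth entry; anything beyond three triggers the warning
    nameserver 192.0.2.53

Trimming the file kubelet reads (the node's resolv.conf, or whatever --resolv-conf points at) to at most three nameserver lines silences the warning.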
May 9 00:41:19.849159 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 60640 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:19.850997 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:19.854859 systemd-logind[1554]: New session 21 of user core. May 9 00:41:19.861979 systemd[1]: Started session-21.scope - Session 21 of User core. May 9 00:41:19.961475 sshd[4077]: pam_unix(sshd:session): session closed for user core May 9 00:41:19.965224 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:60640.service: Deactivated successfully. May 9 00:41:19.967233 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit. May 9 00:41:19.967365 systemd[1]: session-21.scope: Deactivated successfully. May 9 00:41:19.968407 systemd-logind[1554]: Removed session 21. May 9 00:41:24.980953 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:60650.service - OpenSSH per-connection server daemon (10.0.0.1:60650). May 9 00:41:25.013804 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 60650 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:41:25.015327 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:41:25.018818 systemd-logind[1554]: New session 22 of user core. May 9 00:41:25.028982 systemd[1]: Started session-22.scope - Session 22 of User core. May 9 00:41:25.131595 sshd[4113]: pam_unix(sshd:session): session closed for user core May 9 00:41:25.135775 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:60650.service: Deactivated successfully. May 9 00:41:25.138583 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit. May 9 00:41:25.138723 systemd[1]: session-22.scope: Deactivated successfully. May 9 00:41:25.139844 systemd-logind[1554]: Removed session 22.