Mar 13 00:56:42.929535 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:56:42.929559 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:56:42.929568 kernel: BIOS-provided physical RAM map:
Mar 13 00:56:42.929574 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Mar 13 00:56:42.929580 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Mar 13 00:56:42.929586 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 13 00:56:42.929596 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Mar 13 00:56:42.929602 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Mar 13 00:56:42.929608 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 13 00:56:42.929614 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 13 00:56:42.929620 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:56:42.929626 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 13 00:56:42.929632 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Mar 13 00:56:42.929638 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:56:42.929647 kernel: NX (Execute Disable) protection: active
Mar 13 00:56:42.929654 kernel: APIC: Static calls initialized
Mar 13 00:56:42.929660 kernel: SMBIOS 2.8 present.
Mar 13 00:56:42.929666 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Mar 13 00:56:42.929672 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:56:42.929679 kernel: Hypervisor detected: KVM
Mar 13 00:56:42.929687 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 13 00:56:42.929693 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:56:42.929699 kernel: kvm-clock: using sched offset of 7277536610 cycles
Mar 13 00:56:42.929705 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:56:42.929712 kernel: tsc: Detected 2000.000 MHz processor
Mar 13 00:56:42.929719 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:56:42.929725 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:56:42.929732 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Mar 13 00:56:42.929739 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 13 00:56:42.929745 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:56:42.929753 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Mar 13 00:56:42.929760 kernel: Using GB pages for direct mapping
Mar 13 00:56:42.929766 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:56:42.929772 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Mar 13 00:56:42.929779 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:56:42.929785 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:56:42.929792 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:56:42.929798 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 13 00:56:42.929804 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:56:42.929813 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:56:42.929823 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:56:42.929829 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:56:42.929836 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Mar 13 00:56:42.929843 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Mar 13 00:56:42.929852 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 13 00:56:42.929858 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Mar 13 00:56:42.929865 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Mar 13 00:56:42.929872 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Mar 13 00:56:42.929878 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Mar 13 00:56:42.929885 kernel: No NUMA configuration found
Mar 13 00:56:42.929891 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Mar 13 00:56:42.929898 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Mar 13 00:56:42.929905 kernel: Zone ranges:
Mar 13 00:56:42.929914 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:56:42.929920 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 13 00:56:42.929927 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Mar 13 00:56:42.929933 kernel: Device empty
Mar 13 00:56:42.929940 kernel: Movable zone start for each node
Mar 13 00:56:42.929946 kernel: Early memory node ranges
Mar 13 00:56:42.929953 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 13 00:56:42.929960 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Mar 13 00:56:42.929966 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Mar 13 00:56:42.929973 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Mar 13 00:56:42.929982 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:56:42.929989 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 13 00:56:42.929995 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 13 00:56:42.930002 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:56:42.930008 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:56:42.930015 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:56:42.930022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:56:42.930029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:56:42.930035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:56:42.930044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:56:42.930051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:56:42.930057 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:56:42.930064 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:56:42.930071 kernel: TSC deadline timer available
Mar 13 00:56:42.930077 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:56:42.930084 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:56:42.930090 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:56:42.932123 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:56:42.932140 kernel: CPU topo: Num. cores per package: 2
Mar 13 00:56:42.932147 kernel: CPU topo: Num. threads per package: 2
Mar 13 00:56:42.932154 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Mar 13 00:56:42.932161 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:56:42.932168 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:56:42.932175 kernel: kvm-guest: setup PV sched yield
Mar 13 00:56:42.932182 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 13 00:56:42.932188 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:56:42.932195 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:56:42.932204 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 13 00:56:42.932211 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Mar 13 00:56:42.932218 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Mar 13 00:56:42.932224 kernel: pcpu-alloc: [0] 0 1
Mar 13 00:56:42.932231 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:56:42.932238 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:56:42.932245 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:56:42.932253 kernel: random: crng init done
Mar 13 00:56:42.932261 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:56:42.932268 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:56:42.932275 kernel: Fallback order for Node 0: 0
Mar 13 00:56:42.932282 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Mar 13 00:56:42.932288 kernel: Policy zone: Normal
Mar 13 00:56:42.932295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:56:42.932301 kernel: software IO TLB: area num 2.
Mar 13 00:56:42.932308 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 00:56:42.932315 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:56:42.932324 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:56:42.932330 kernel: Dynamic Preempt: voluntary
Mar 13 00:56:42.932337 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:56:42.932344 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:56:42.932351 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 00:56:42.932358 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:56:42.932365 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:56:42.932372 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:56:42.932379 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:56:42.932385 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 00:56:42.932394 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:56:42.932408 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:56:42.932417 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 00:56:42.932424 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 13 00:56:42.932431 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:56:42.932438 kernel: Console: colour VGA+ 80x25
Mar 13 00:56:42.932445 kernel: printk: legacy console [tty0] enabled
Mar 13 00:56:42.932452 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:56:42.932459 kernel: ACPI: Core revision 20240827
Mar 13 00:56:42.932468 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:56:42.932475 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:56:42.932482 kernel: x2apic enabled
Mar 13 00:56:42.932489 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:56:42.932496 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:56:42.932503 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:56:42.932510 kernel: kvm-guest: setup PV IPIs
Mar 13 00:56:42.932520 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:56:42.932527 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Mar 13 00:56:42.932534 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Mar 13 00:56:42.932541 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:56:42.932548 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:56:42.932555 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:56:42.932562 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:56:42.932569 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:56:42.932576 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:56:42.932585 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 13 00:56:42.932592 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 13 00:56:42.932599 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 13 00:56:42.932606 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 13 00:56:42.932614 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 13 00:56:42.932621 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:56:42.932628 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 13 00:56:42.932635 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:56:42.932644 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:56:42.932651 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:56:42.932658 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:56:42.932665 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:56:42.932672 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 13 00:56:42.932679 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:56:42.932686 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Mar 13 00:56:42.932693 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Mar 13 00:56:42.932700 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:56:42.932709 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:56:42.932716 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:56:42.932723 kernel: landlock: Up and running.
Mar 13 00:56:42.932729 kernel: SELinux: Initializing.
Mar 13 00:56:42.932736 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:56:42.932744 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:56:42.932751 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 13 00:56:42.932758 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 13 00:56:42.932765 kernel: ... version: 0
Mar 13 00:56:42.932774 kernel: ... bit width: 48
Mar 13 00:56:42.932780 kernel: ... generic registers: 6
Mar 13 00:56:42.932787 kernel: ... value mask: 0000ffffffffffff
Mar 13 00:56:42.932794 kernel: ... max period: 00007fffffffffff
Mar 13 00:56:42.932801 kernel: ... fixed-purpose events: 0
Mar 13 00:56:42.932808 kernel: ... event mask: 000000000000003f
Mar 13 00:56:42.932815 kernel: signal: max sigframe size: 3376
Mar 13 00:56:42.932822 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:56:42.932829 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:56:42.933013 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:56:42.933020 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:56:42.933026 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:56:42.933033 kernel: .... node #0, CPUs: #1
Mar 13 00:56:42.933040 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 00:56:42.933047 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Mar 13 00:56:42.933055 kernel: Memory: 3952856K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 235488K reserved, 0K cma-reserved)
Mar 13 00:56:42.933061 kernel: devtmpfs: initialized
Mar 13 00:56:42.933068 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:56:42.933077 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:56:42.933084 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 00:56:42.933091 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:56:42.933098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:56:42.933118 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:56:42.933125 kernel: audit: type=2000 audit(1773363400.449:1): state=initialized audit_enabled=0 res=1
Mar 13 00:56:42.933132 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:56:42.933139 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:56:42.933147 kernel: cpuidle: using governor menu
Mar 13 00:56:42.933156 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:56:42.933163 kernel: dca service started, version 1.12.1
Mar 13 00:56:42.933170 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 13 00:56:42.933177 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 13 00:56:42.933184 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:56:42.933191 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:56:42.933198 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:56:42.933205 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:56:42.933212 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:56:42.933222 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:56:42.933228 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:56:42.933235 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:56:42.933242 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:56:42.933249 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:56:42.933256 kernel: ACPI: Interpreter enabled
Mar 13 00:56:42.933263 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:56:42.933270 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:56:42.933277 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:56:42.933286 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:56:42.933293 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:56:42.933300 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:56:42.933496 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:56:42.933628 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:56:42.933753 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:56:42.933762 kernel: PCI host bridge to bus 0000:00
Mar 13 00:56:42.933895 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:56:42.934010 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:56:42.934175 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:56:42.934294 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 13 00:56:42.934410 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 13 00:56:42.934523 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Mar 13 00:56:42.934707 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:56:42.934898 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:56:42.935067 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:56:42.935219 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 13 00:56:42.935361 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 13 00:56:42.935485 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 13 00:56:42.935607 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:56:42.935741 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Mar 13 00:56:42.935869 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Mar 13 00:56:42.935990 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 13 00:56:42.938159 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 13 00:56:42.938312 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:56:42.938438 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Mar 13 00:56:42.938563 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 13 00:56:42.938735 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 13 00:56:42.938862 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 13 00:56:42.939001 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:56:42.939238 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:56:42.939376 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:56:42.939498 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Mar 13 00:56:42.939619 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Mar 13 00:56:42.939756 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:56:42.939921 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 13 00:56:42.939933 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:56:42.939940 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:56:42.939947 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:56:42.939955 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:56:42.939961 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:56:42.939969 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:56:42.939979 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:56:42.939986 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:56:42.939993 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:56:42.940000 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:56:42.940007 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:56:42.940014 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:56:42.940021 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:56:42.940028 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:56:42.940035 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:56:42.940044 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:56:42.940051 kernel: iommu: Default domain type: Translated
Mar 13 00:56:42.940058 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:56:42.940066 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:56:42.940073 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:56:42.940080 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Mar 13 00:56:42.940087 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Mar 13 00:56:42.940237 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:56:42.940364 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:56:42.940485 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:56:42.940494 kernel: vgaarb: loaded
Mar 13 00:56:42.940501 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:56:42.940508 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:56:42.940516 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:56:42.940523 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:56:42.940530 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:56:42.940537 kernel: pnp: PnP ACPI init
Mar 13 00:56:42.940679 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 13 00:56:42.940690 kernel: pnp: PnP ACPI: found 5 devices
Mar 13 00:56:42.940697 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:56:42.940704 kernel: NET: Registered PF_INET protocol family
Mar 13 00:56:42.940711 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:56:42.940718 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 00:56:42.940725 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:56:42.940732 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:56:42.940743 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 00:56:42.940750 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 00:56:42.940757 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:56:42.940764 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:56:42.940771 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:56:42.940778 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:56:42.940893 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:56:42.941005 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:56:42.941138 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:56:42.941258 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 13 00:56:42.941369 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 13 00:56:42.941480 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Mar 13 00:56:42.941489 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:56:42.941497 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 13 00:56:42.941504 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Mar 13 00:56:42.941511 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Mar 13 00:56:42.941518 kernel: Initialise system trusted keyrings
Mar 13 00:56:42.941528 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 00:56:42.941535 kernel: Key type asymmetric registered
Mar 13 00:56:42.941542 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:56:42.941549 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:56:42.941557 kernel: io scheduler mq-deadline registered
Mar 13 00:56:42.941563 kernel: io scheduler kyber registered
Mar 13 00:56:42.941570 kernel: io scheduler bfq registered
Mar 13 00:56:42.941577 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:56:42.941585 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 13 00:56:42.941594 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 13 00:56:42.941601 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:56:42.941608 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:56:42.941616 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:56:42.941623 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:56:42.941630 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:56:42.941767 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 13 00:56:42.941778 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 13 00:56:42.941893 kernel: rtc_cmos 00:03: registered as rtc0
Mar 13 00:56:42.942012 kernel: rtc_cmos 00:03: setting system clock to 2026-03-13T00:56:42 UTC (1773363402)
Mar 13 00:56:42.943132 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 13 00:56:42.943147 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 13 00:56:42.943155 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:56:42.943162 kernel: Segment Routing with IPv6
Mar 13 00:56:42.943170 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:56:42.943177 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:56:42.943184 kernel: Key type dns_resolver registered
Mar 13 00:56:42.943194 kernel: IPI shorthand broadcast: enabled
Mar 13 00:56:42.943202 kernel: sched_clock: Marking stable (3031004950, 352677380)->(3481466090, -97783760)
Mar 13 00:56:42.943209 kernel: registered taskstats version 1
Mar 13 00:56:42.943216 kernel: Loading compiled-in X.509 certificates
Mar 13 00:56:42.943223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:56:42.943230 kernel: Demotion targets for Node 0: null
Mar 13 00:56:42.943237 kernel: Key type .fscrypt registered
Mar 13 00:56:42.943244 kernel: Key type fscrypt-provisioning registered
Mar 13 00:56:42.943252 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:56:42.943261 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:56:42.943269 kernel: ima: No architecture policies found
Mar 13 00:56:42.943276 kernel: clk: Disabling unused clocks
Mar 13 00:56:42.943282 kernel: Warning: unable to open an initial console.
Mar 13 00:56:42.943290 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:56:42.943297 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:56:42.943304 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:56:42.943311 kernel: Run /init as init process
Mar 13 00:56:42.943319 kernel: with arguments:
Mar 13 00:56:42.943328 kernel: /init
Mar 13 00:56:42.943336 kernel: with environment:
Mar 13 00:56:42.943360 kernel: HOME=/
Mar 13 00:56:42.943370 kernel: TERM=linux
Mar 13 00:56:42.943378 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:56:42.943388 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:56:42.943396 systemd[1]: Detected virtualization kvm.
Mar 13 00:56:42.943406 systemd[1]: Detected architecture x86-64.
Mar 13 00:56:42.943414 systemd[1]: Running in initrd.
Mar 13 00:56:42.943421 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:56:42.943429 systemd[1]: Hostname set to .
Mar 13 00:56:42.943437 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:56:42.943445 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:56:42.943452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:56:42.943460 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:56:42.943471 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:56:42.943479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:56:42.943487 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:56:42.943495 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:56:42.943504 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:56:42.943512 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:56:42.943520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:56:42.943530 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:56:42.943538 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:56:42.943545 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:56:42.943553 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:56:42.943560 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:56:42.943568 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:56:42.943576 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:56:42.943583 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:56:42.943591 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:56:42.943602 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:56:42.943613 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:56:42.943622 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:56:42.943630 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:56:42.943638 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:56:42.943648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:56:42.943656 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:56:42.943664 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:56:42.943672 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:56:42.943679 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:56:42.943687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:56:42.943695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:56:42.943724 systemd-journald[187]: Collecting audit messages is disabled.
Mar 13 00:56:42.943746 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:56:42.943757 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:56:42.943765 systemd-journald[187]: Journal started
Mar 13 00:56:42.943782 systemd-journald[187]: Runtime Journal (/run/log/journal/57c5d69dc58f428d942a6139edb1570b) is 8M, max 78.2M, 70.2M free.
Mar 13 00:56:42.954158 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:56:42.952339 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:56:42.954545 systemd-modules-load[188]: Inserted module 'overlay'
Mar 13 00:56:42.961505 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:56:42.965222 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:56:43.085186 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:56:43.085215 kernel: Bridge firewalling registered
Mar 13 00:56:42.992379 systemd-modules-load[188]: Inserted module 'br_netfilter'
Mar 13 00:56:43.089292 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:56:43.090434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:56:43.093042 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:56:43.097234 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:56:43.097743 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:56:43.103224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:56:43.106764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:56:43.113272 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:56:43.122463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:56:43.128234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:56:43.130816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:56:43.135691 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:56:43.144025 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:56:43.171655 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:56:43.181210 systemd-resolved[217]: Positive Trust Anchors:
Mar 13 00:56:43.181228 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:56:43.181254 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:56:43.185394 systemd-resolved[217]: Defaulting to hostname 'linux'.
Mar 13 00:56:43.186704 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:56:43.189224 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:56:43.289154 kernel: SCSI subsystem initialized
Mar 13 00:56:43.299137 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:56:43.311214 kernel: iscsi: registered transport (tcp)
Mar 13 00:56:43.331137 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:56:43.331170 kernel: QLogic iSCSI HBA Driver
Mar 13 00:56:43.356419 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:56:43.378357 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:56:43.380242 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:56:43.446549 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:56:43.450040 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:56:43.506133 kernel: raid6: avx2x4 gen() 25353 MB/s
Mar 13 00:56:43.524133 kernel: raid6: avx2x2 gen() 24112 MB/s
Mar 13 00:56:43.542196 kernel: raid6: avx2x1 gen() 17126 MB/s
Mar 13 00:56:43.542215 kernel: raid6: using algorithm avx2x4 gen() 25353 MB/s
Mar 13 00:56:43.562376 kernel: raid6: .... xor() 3185 MB/s, rmw enabled
Mar 13 00:56:43.562402 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:56:43.584138 kernel: xor: automatically using best checksumming function avx
Mar 13 00:56:43.722166 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:56:43.730382 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:56:43.733060 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:56:43.761970 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Mar 13 00:56:43.768026 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:56:43.771614 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:56:43.793684 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Mar 13 00:56:43.820320 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:56:43.823394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:56:43.904928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:56:43.909297 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:56:43.969131 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:56:44.209141 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 13 00:56:44.219177 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:56:44.247120 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Mar 13 00:56:44.260216 kernel: scsi host0: Virtio SCSI HBA
Mar 13 00:56:44.259787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:56:44.260180 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:56:44.277847 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 13 00:56:44.268908 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:56:44.281205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:56:44.282424 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:56:44.289139 kernel: libata version 3.00 loaded.
Mar 13 00:56:44.313316 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:56:44.313537 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:56:44.317738 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 13 00:56:44.318205 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Mar 13 00:56:44.321408 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 13 00:56:44.324212 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 13 00:56:44.324462 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 13 00:56:44.326130 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:56:44.326308 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:56:44.326455 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:56:44.329812 kernel: scsi host1: ahci
Mar 13 00:56:44.330035 kernel: scsi host2: ahci
Mar 13 00:56:44.331626 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:56:44.331665 kernel: GPT:9289727 != 167739391
Mar 13 00:56:44.331677 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:56:44.331688 kernel: GPT:9289727 != 167739391
Mar 13 00:56:44.331697 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:56:44.331707 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:56:44.331717 kernel: scsi host3: ahci
Mar 13 00:56:44.331891 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 13 00:56:44.333137 kernel: scsi host4: ahci
Mar 13 00:56:44.333315 kernel: scsi host5: ahci
Mar 13 00:56:44.334598 kernel: scsi host6: ahci
Mar 13 00:56:44.334781 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Mar 13 00:56:44.334793 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Mar 13 00:56:44.334803 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Mar 13 00:56:44.334818 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Mar 13 00:56:44.334829 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Mar 13 00:56:44.334839 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Mar 13 00:56:44.387478 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 13 00:56:44.511165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:56:44.538278 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 13 00:56:44.546522 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 13 00:56:44.547362 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 13 00:56:44.557176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 13 00:56:44.560277 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:56:44.580094 disk-uuid[605]: Primary Header is updated.
Mar 13 00:56:44.580094 disk-uuid[605]: Secondary Entries is updated.
Mar 13 00:56:44.580094 disk-uuid[605]: Secondary Header is updated.
Mar 13 00:56:44.591169 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:56:44.605143 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:56:44.647124 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:56:44.647199 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:56:44.647213 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:56:44.652618 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:56:44.652656 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:56:44.654449 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 13 00:56:44.773930 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:56:44.791845 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:56:44.792724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:56:44.794571 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:56:44.798333 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:56:44.820963 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:56:45.610240 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:56:45.610312 disk-uuid[606]: The operation has completed successfully.
Mar 13 00:56:45.664758 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:56:45.664918 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:56:45.699970 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:56:45.713638 sh[633]: Success
Mar 13 00:56:45.734204 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:56:45.734246 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:56:45.737336 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:56:45.752530 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:56:45.797626 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:56:45.800523 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:56:45.812547 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:56:45.826142 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (645)
Mar 13 00:56:45.833080 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:56:45.833118 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:56:45.844139 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 13 00:56:45.844203 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:56:45.846412 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:56:45.850521 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:56:45.851810 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:56:45.853013 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:56:45.853795 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:56:45.858470 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:56:45.892122 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (676)
Mar 13 00:56:45.898401 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:56:45.898460 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:56:45.910096 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:56:45.910151 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:56:45.910162 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:56:45.919141 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:56:45.920455 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:56:45.924704 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:56:46.041465 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:56:46.044459 ignition[740]: Ignition 2.22.0
Mar 13 00:56:46.044475 ignition[740]: Stage: fetch-offline
Mar 13 00:56:46.044510 ignition[740]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:56:46.048184 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:56:46.044521 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:56:46.051785 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:56:46.044624 ignition[740]: parsed url from cmdline: ""
Mar 13 00:56:46.044629 ignition[740]: no config URL provided
Mar 13 00:56:46.044636 ignition[740]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:56:46.044646 ignition[740]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:56:46.044653 ignition[740]: failed to fetch config: resource requires networking
Mar 13 00:56:46.044818 ignition[740]: Ignition finished successfully
Mar 13 00:56:46.083988 systemd-networkd[819]: lo: Link UP
Mar 13 00:56:46.084004 systemd-networkd[819]: lo: Gained carrier
Mar 13 00:56:46.085870 systemd-networkd[819]: Enumeration completed
Mar 13 00:56:46.086752 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:56:46.087049 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:56:46.087054 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:56:46.088335 systemd[1]: Reached target network.target - Network.
Mar 13 00:56:46.090185 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:56:46.090492 systemd-networkd[819]: eth0: Link UP
Mar 13 00:56:46.090652 systemd-networkd[819]: eth0: Gained carrier
Mar 13 00:56:46.090678 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:56:46.124823 ignition[823]: Ignition 2.22.0
Mar 13 00:56:46.124840 ignition[823]: Stage: fetch
Mar 13 00:56:46.125241 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:56:46.125253 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:56:46.125351 ignition[823]: parsed url from cmdline: ""
Mar 13 00:56:46.125356 ignition[823]: no config URL provided
Mar 13 00:56:46.125361 ignition[823]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:56:46.125370 ignition[823]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:56:46.125413 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #1
Mar 13 00:56:46.125609 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:56:46.325729 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #2
Mar 13 00:56:46.326214 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:56:46.726580 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #3
Mar 13 00:56:46.727481 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 13 00:56:46.816181 systemd-networkd[819]: eth0: DHCPv4 address 172.236.110.174/24, gateway 172.236.110.1 acquired from 23.205.167.177
Mar 13 00:56:47.378445 systemd-networkd[819]: eth0: Gained IPv6LL
Mar 13 00:56:47.528609 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #4
Mar 13 00:56:47.624803 ignition[823]: PUT result: OK
Mar 13 00:56:47.624874 ignition[823]: GET http://169.254.169.254/v1/user-data: attempt #1
Mar 13 00:56:47.736196 ignition[823]: GET result: OK
Mar 13 00:56:47.736274 ignition[823]: parsing config with SHA512: 3662a411821ad06f8b7f1bfbd94e314b4dc1840bb3d8065f232fc3203661c3fe5637025faa51312567291ada895fad1c9506265c64bf65b0f6a4ab9f5b1e9ff4
Mar 13 00:56:47.739282 unknown[823]: fetched base config from "system"
Mar 13 00:56:47.739292 unknown[823]: fetched base config from "system"
Mar 13 00:56:47.739761 ignition[823]: fetch: fetch complete
Mar 13 00:56:47.739298 unknown[823]: fetched user config from "akamai"
Mar 13 00:56:47.739767 ignition[823]: fetch: fetch passed
Mar 13 00:56:47.739816 ignition[823]: Ignition finished successfully
Mar 13 00:56:47.742713 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:56:47.746287 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:56:47.773003 ignition[831]: Ignition 2.22.0
Mar 13 00:56:47.773132 ignition[831]: Stage: kargs
Mar 13 00:56:47.773252 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:56:47.773263 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:56:47.777331 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:56:47.773936 ignition[831]: kargs: kargs passed
Mar 13 00:56:47.773973 ignition[831]: Ignition finished successfully
Mar 13 00:56:47.780699 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:56:47.809772 ignition[837]: Ignition 2.22.0
Mar 13 00:56:47.809785 ignition[837]: Stage: disks
Mar 13 00:56:47.809910 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:56:47.809922 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:56:47.810752 ignition[837]: disks: disks passed
Mar 13 00:56:47.810792 ignition[837]: Ignition finished successfully
Mar 13 00:56:47.813462 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:56:47.815485 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:56:47.816583 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:56:47.818142 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:56:47.820264 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:56:47.822246 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:56:47.824708 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:56:47.855871 systemd-fsck[845]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:56:47.859080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:56:47.862549 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:56:47.988122 kernel: EXT4-fs (sda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:56:47.989492 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:56:47.990899 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:56:47.993315 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:56:47.996293 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:56:47.998542 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:56:47.998597 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:56:47.998623 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:56:48.009621 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:56:48.011420 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:56:48.023823 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853)
Mar 13 00:56:48.023866 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:56:48.023880 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:56:48.035884 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:56:48.035909 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:56:48.035922 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:56:48.040971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:56:48.080437 initrd-setup-root[878]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:56:48.085425 initrd-setup-root[885]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:56:48.090696 initrd-setup-root[892]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:56:48.094872 initrd-setup-root[899]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:56:48.185694 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:56:48.189199 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:56:48.192214 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:56:48.204415 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:56:48.209157 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:56:48.222530 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:56:48.237832 ignition[967]: INFO : Ignition 2.22.0
Mar 13 00:56:48.237832 ignition[967]: INFO : Stage: mount
Mar 13 00:56:48.239435 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:56:48.239435 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:56:48.241252 ignition[967]: INFO : mount: mount passed
Mar 13 00:56:48.241252 ignition[967]: INFO : Ignition finished successfully
Mar 13 00:56:48.242949 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:56:48.245179 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:56:48.990754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:56:49.018129 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (978)
Mar 13 00:56:49.018167 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:56:49.021494 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:56:49.028530 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:56:49.028556 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:56:49.032364 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:56:49.034606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:56:49.067220 ignition[994]: INFO : Ignition 2.22.0
Mar 13 00:56:49.067220 ignition[994]: INFO : Stage: files
Mar 13 00:56:49.069269 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:56:49.069269 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:56:49.069269 ignition[994]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:56:49.072696 ignition[994]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:56:49.072696 ignition[994]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:56:49.072696 ignition[994]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:56:49.072696 ignition[994]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:56:49.077647 ignition[994]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:56:49.074269 unknown[994]: wrote ssh authorized keys file for user: core
Mar 13 00:56:49.084734 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:56:49.086146 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:56:49.398013 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:56:49.538313 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:56:49.538313 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:56:49.541196 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:56:49.541196 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:56:49.541196 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:56:49.541196 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:56:49.541196 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:56:49.541196 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:56:49.541196 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:56:49.548733 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:56:49.548733 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:56:49.548733 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:56:49.548733 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:56:49.548733 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:56:49.548733 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 13 00:56:49.913750 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 00:56:50.660681 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 13 00:56:50.660681 ignition[994]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 00:56:50.663507 ignition[994]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:56:50.664985 ignition[994]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:56:50.664985 ignition[994]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 00:56:50.664985 ignition[994]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 13 00:56:50.668610 ignition[994]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 13 00:56:50.668610 ignition[994]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 13 00:56:50.668610 ignition[994]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 13 00:56:50.668610 ignition[994]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:56:50.668610 ignition[994]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:56:50.668610 ignition[994]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:56:50.668610 ignition[994]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:56:50.668610 ignition[994]: INFO : files: files passed
Mar 13 00:56:50.668610 ignition[994]: INFO : Ignition finished successfully
Mar 13 00:56:50.669165 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:56:50.673226 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:56:50.677113 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:56:50.684653 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:56:50.684760 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:56:50.694643 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:56:50.695972 initrd-setup-root-after-ignition[1025]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:56:50.697235 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:56:50.697861 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:56:50.699313 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:56:50.701604 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:56:50.755137 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:56:50.755258 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:56:50.757128 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:56:50.758373 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:56:50.759942 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:56:50.760725 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:56:50.790723 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:56:50.792838 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:56:50.809510 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:56:50.810487 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:56:50.812354 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:56:50.813951 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:56:50.814088 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:56:50.815801 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:56:50.816875 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:56:50.818416 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:56:50.819844 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:56:50.821296 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:56:50.822873 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:56:50.824473 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:56:50.826322 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:56:50.827892 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:56:50.829531 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:56:50.831068 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:56:50.832621 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:56:50.832760 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:56:50.834381 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:56:50.835424 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:56:50.836870 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:56:50.837338 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:56:50.838455 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:56:50.838548 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:56:50.840651 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 00:56:50.840804 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:56:50.841799 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 00:56:50.841933 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 00:56:50.845193 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 00:56:50.846322 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 00:56:50.846431 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:56:50.852263 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 00:56:50.853321 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 00:56:50.853476 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:56:50.856982 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 00:56:50.857143 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:56:50.868042 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 00:56:50.886990 ignition[1049]: INFO : Ignition 2.22.0
Mar 13 00:56:50.886990 ignition[1049]: INFO : Stage: umount
Mar 13 00:56:50.886990 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:56:50.886990 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Mar 13 00:56:50.886990 ignition[1049]: INFO : umount: umount passed
Mar 13 00:56:50.886990 ignition[1049]: INFO : Ignition finished successfully
Mar 13 00:56:50.868160 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 00:56:50.889007 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 00:56:50.890885 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 00:56:50.896747 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 00:56:50.896806 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 00:56:50.898863 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 00:56:50.898913 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 00:56:50.901291 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 00:56:50.901435 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 00:56:50.903156 systemd[1]: Stopped target network.target - Network.
Mar 13 00:56:50.904440 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 00:56:50.904492 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:56:50.907160 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 00:56:50.907801 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 00:56:50.909492 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:56:50.910299 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 00:56:50.911656 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 00:56:50.913051 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 00:56:50.913118 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:56:50.914583 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 00:56:50.914625 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:56:50.916253 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 00:56:50.916307 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 00:56:50.917647 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 00:56:50.917693 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 00:56:50.919665 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 00:56:50.921402 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 00:56:50.924075 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 00:56:50.924676 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 00:56:50.924784 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 00:56:50.925888 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 00:56:50.925977 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 00:56:50.930735 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 00:56:50.930868 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 00:56:50.934866 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 00:56:50.935175 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 00:56:50.935293 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 00:56:50.937847 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 00:56:50.938657 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 00:56:50.939824 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 00:56:50.939867 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:56:50.941968 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 00:56:50.943522 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 00:56:50.943574 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:56:50.946313 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 00:56:50.946362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:56:50.949079 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 00:56:50.949143 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:56:50.950874 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 00:56:50.950925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:56:50.952160 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:56:50.955808 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 00:56:50.955872 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:56:50.968469 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 00:56:50.969320 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 00:56:50.973364 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 00:56:50.973532 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:56:50.975349 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 00:56:50.975417 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:56:50.976511 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 00:56:50.976553 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:56:50.978085 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 00:56:50.978164 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:56:50.980289 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 00:56:50.980337 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:56:50.981848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 00:56:50.981894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:56:50.985206 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 00:56:50.986171 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 00:56:50.986225 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:56:50.989042 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 00:56:50.989096 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:56:50.991508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:56:50.991557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:56:50.995744 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 00:56:50.995804 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 00:56:50.995851 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:56:50.998880 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 00:56:50.998975 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 00:56:51.002508 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 00:56:51.004378 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 00:56:51.020646 systemd[1]: Switching root.
Mar 13 00:56:51.066574 systemd-journald[187]: Journal stopped
Mar 13 00:56:52.304909 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Mar 13 00:56:52.304949 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 00:56:52.304967 kernel: SELinux: policy capability open_perms=1
Mar 13 00:56:52.304984 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 00:56:52.304997 kernel: SELinux: policy capability always_check_network=0
Mar 13 00:56:52.305243 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 00:56:52.305261 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 00:56:52.305276 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 00:56:52.305290 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 00:56:52.305304 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 00:56:52.305316 kernel: audit: type=1403 audit(1773363411.244:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 00:56:52.305330 systemd[1]: Successfully loaded SELinux policy in 75.071ms.
Mar 13 00:56:52.305350 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.681ms.
Mar 13 00:56:52.305367 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:56:52.305383 systemd[1]: Detected virtualization kvm.
Mar 13 00:56:52.305397 systemd[1]: Detected architecture x86-64.
Mar 13 00:56:52.305414 systemd[1]: Detected first boot.
Mar 13 00:56:52.305429 systemd[1]: Initializing machine ID from random generator.
Mar 13 00:56:52.305443 zram_generator::config[1097]: No configuration found.
Mar 13 00:56:52.305459 kernel: Guest personality initialized and is inactive
Mar 13 00:56:52.305473 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 00:56:52.305483 kernel: Initialized host personality
Mar 13 00:56:52.305497 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 00:56:52.305512 systemd[1]: Populated /etc with preset unit settings.
Mar 13 00:56:52.305534 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 00:56:52.305549 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 00:56:52.305563 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 00:56:52.305579 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:56:52.305594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 00:56:52.305609 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 00:56:52.305625 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 00:56:52.305643 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 00:56:52.305659 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 00:56:52.305675 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 00:56:52.305690 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 00:56:52.305706 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 00:56:52.305721 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:56:52.305736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:56:52.305749 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 00:56:52.305766 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 00:56:52.305786 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 00:56:52.305804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:56:52.305819 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 00:56:52.305834 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:56:52.305849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:56:52.305866 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 00:56:52.305884 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 00:56:52.305899 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:56:52.305914 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 00:56:52.305930 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:56:52.305945 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:56:52.305960 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:56:52.305976 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:56:52.305992 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 00:56:52.306008 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 00:56:52.306023 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 00:56:52.306039 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:56:52.306055 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:56:52.306070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:56:52.306088 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 00:56:52.308199 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 00:56:52.308222 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 00:56:52.308240 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 00:56:52.308255 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:52.308271 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 00:56:52.308287 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 00:56:52.308303 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 00:56:52.308323 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 00:56:52.308340 systemd[1]: Reached target machines.target - Containers.
Mar 13 00:56:52.308356 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 00:56:52.308377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:56:52.308391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:56:52.308406 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 00:56:52.308419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:56:52.308433 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:56:52.308448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:56:52.308466 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 00:56:52.308482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:56:52.308497 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 00:56:52.308512 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 00:56:52.308527 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 00:56:52.308543 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 00:56:52.308559 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 00:56:52.308579 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:56:52.308598 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:56:52.308613 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:56:52.308629 kernel: loop: module loaded
Mar 13 00:56:52.308644 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:56:52.308660 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 00:56:52.308675 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 00:56:52.308686 kernel: ACPI: bus type drm_connector registered
Mar 13 00:56:52.308701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:56:52.308718 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 00:56:52.308734 systemd[1]: Stopped verity-setup.service.
Mar 13 00:56:52.308749 kernel: fuse: init (API version 7.41)
Mar 13 00:56:52.308765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:52.308816 systemd-journald[1174]: Collecting audit messages is disabled.
Mar 13 00:56:52.309043 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 00:56:52.309058 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 00:56:52.309075 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 00:56:52.309090 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 00:56:52.309124 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 00:56:52.309140 systemd-journald[1174]: Journal started
Mar 13 00:56:52.309173 systemd-journald[1174]: Runtime Journal (/run/log/journal/7182c9bd22444a5ebace602bdd0375b4) is 8M, max 78.2M, 70.2M free.
Mar 13 00:56:51.885568 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 00:56:51.911937 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 13 00:56:51.912459 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 00:56:52.312149 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:56:52.313855 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 00:56:52.314940 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 00:56:52.316092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:56:52.317236 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 00:56:52.317438 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 00:56:52.318613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:56:52.318870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:56:52.320363 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:56:52.320627 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:56:52.321682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:56:52.321877 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:56:52.323094 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 00:56:52.323391 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 00:56:52.324610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:56:52.324882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:56:52.326012 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:56:52.327416 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:56:52.328534 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 00:56:52.329811 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 00:56:52.342298 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:56:52.347188 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 00:56:52.348950 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 00:56:52.350634 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 00:56:52.350725 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:56:52.353860 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 00:56:52.360219 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 00:56:52.361195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:56:52.365308 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 00:56:52.368465 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 00:56:52.370180 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:56:52.372875 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 00:56:52.373659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:56:52.379447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:56:52.383210 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 00:56:52.398339 systemd-journald[1174]: Time spent on flushing to /var/log/journal/7182c9bd22444a5ebace602bdd0375b4 is 107.081ms for 1004 entries.
Mar 13 00:56:52.398339 systemd-journald[1174]: System Journal (/var/log/journal/7182c9bd22444a5ebace602bdd0375b4) is 8M, max 195.6M, 187.6M free.
Mar 13 00:56:52.532671 systemd-journald[1174]: Received client request to flush runtime journal.
Mar 13 00:56:52.532706 kernel: loop0: detected capacity change from 0 to 128560
Mar 13 00:56:52.532721 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 00:56:52.388471 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 00:56:52.393766 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 00:56:52.395715 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 00:56:52.431472 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 00:56:52.432385 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 00:56:52.436807 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 00:56:52.439731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:56:52.506212 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 00:56:52.511651 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:56:52.536282 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 00:56:52.548066 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 00:56:52.554251 kernel: loop1: detected capacity change from 0 to 219192
Mar 13 00:56:52.554547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:56:52.603296 kernel: loop2: detected capacity change from 0 to 110984
Mar 13 00:56:52.615347 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Mar 13 00:56:52.615363 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Mar 13 00:56:52.622383 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:56:52.646134 kernel: loop3: detected capacity change from 0 to 8
Mar 13 00:56:52.666145 kernel: loop4: detected capacity change from 0 to 128560
Mar 13 00:56:52.684660 kernel: loop5: detected capacity change from 0 to 219192
Mar 13 00:56:52.709127 kernel: loop6: detected capacity change from 0 to 110984
Mar 13 00:56:52.725147 kernel: loop7: detected capacity change from 0 to 8
Mar 13 00:56:52.725519 (sd-merge)[1245]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Mar 13 00:56:52.726580 (sd-merge)[1245]: Merged extensions into '/usr'.
Mar 13 00:56:52.731219 systemd[1]: Reload requested from client PID 1218 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 00:56:52.731294 systemd[1]: Reloading...
Mar 13 00:56:52.843327 zram_generator::config[1271]: No configuration found.
Mar 13 00:56:52.911820 ldconfig[1213]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 00:56:53.056380 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 00:56:53.056560 systemd[1]: Reloading finished in 324 ms.
Mar 13 00:56:53.090365 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 00:56:53.091614 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 00:56:53.092757 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 00:56:53.104319 systemd[1]: Starting ensure-sysext.service...
Mar 13 00:56:53.108212 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:56:53.110618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:56:53.127577 systemd[1]: Reload requested from client PID 1315 ('systemctl') (unit ensure-sysext.service)...
Mar 13 00:56:53.127594 systemd[1]: Reloading...
Mar 13 00:56:53.137871 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 00:56:53.138179 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 00:56:53.138739 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 00:56:53.139064 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 00:56:53.140354 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 00:56:53.140723 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Mar 13 00:56:53.140862 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Mar 13 00:56:53.148725 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:56:53.148844 systemd-tmpfiles[1316]: Skipping /boot
Mar 13 00:56:53.152689 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Mar 13 00:56:53.168380 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 00:56:53.168395 systemd-tmpfiles[1316]: Skipping /boot
Mar 13 00:56:53.242132 zram_generator::config[1341]: No configuration found.
Mar 13 00:56:53.442121 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 00:56:53.450117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 13 00:56:53.480131 kernel: ACPI: button: Power Button [PWRF]
Mar 13 00:56:53.527652 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 00:56:53.527738 systemd[1]: Reloading finished in 399 ms.
Mar 13 00:56:53.537632 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:56:53.538817 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:56:53.557187 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 13 00:56:53.557451 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 13 00:56:53.582439 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:53.586296 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:56:53.590354 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 00:56:53.592282 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:56:53.593431 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:56:53.598729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:56:53.604172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:56:53.606285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:56:53.606389 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:56:53.611330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 00:56:53.617361 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:56:53.625507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:56:53.630375 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:56:53.631302 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:53.633984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:56:53.640694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:56:53.642941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:56:53.644418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:56:53.668821 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:53.669360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:56:53.671992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 00:56:53.677416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 00:56:53.678388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:56:53.678477 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:56:53.678553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:53.679661 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:56:53.683055 kernel: EDAC MC: Ver: 3.0.0
Mar 13 00:56:53.682756 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:56:53.683614 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:56:53.709701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:53.710049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 00:56:53.712977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 00:56:53.719378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 00:56:53.721273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 00:56:53.721374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 00:56:53.725410 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:56:53.727298 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:56:53.734691 systemd[1]: Finished ensure-sysext.service.
Mar 13 00:56:53.740062 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 13 00:56:53.742206 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:56:53.749328 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:56:53.753071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:56:53.753329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:56:53.755552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:56:53.755781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:56:53.757854 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:56:53.787734 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:56:53.789150 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:56:53.790301 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:56:53.790509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:56:53.804587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:56:53.810824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:56:53.821061 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:56:53.836450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 13 00:56:53.841534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 00:56:53.847502 augenrules[1489]: No rules
Mar 13 00:56:53.849426 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:56:53.851729 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:56:53.851983 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:56:53.854194 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:56:53.867812 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:56:53.918282 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:56:54.014487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:56:54.039813 systemd-networkd[1442]: lo: Link UP
Mar 13 00:56:54.040063 systemd-networkd[1442]: lo: Gained carrier
Mar 13 00:56:54.041965 systemd-networkd[1442]: Enumeration completed
Mar 13 00:56:54.042047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:56:54.044497 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:56:54.044812 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:56:54.045617 systemd-networkd[1442]: eth0: Link UP
Mar 13 00:56:54.045868 systemd-networkd[1442]: eth0: Gained carrier
Mar 13 00:56:54.045928 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:56:54.046574 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:56:54.054873 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:56:54.068419 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 13 00:56:54.069861 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 00:56:54.075610 systemd-resolved[1444]: Positive Trust Anchors:
Mar 13 00:56:54.075851 systemd-resolved[1444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:56:54.075928 systemd-resolved[1444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:56:54.079302 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:56:54.080844 systemd-resolved[1444]: Defaulting to hostname 'linux'.
Mar 13 00:56:54.083144 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:56:54.083915 systemd[1]: Reached target network.target - Network.
Mar 13 00:56:54.084651 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:56:54.085421 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:56:54.086250 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:56:54.087029 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:56:54.087807 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:56:54.088722 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:56:54.089576 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:56:54.090341 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:56:54.091085 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:56:54.091136 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:56:54.091799 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:56:54.093315 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:56:54.095515 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:56:54.098534 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:56:54.121128 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:56:54.121925 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:56:54.132721 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:56:54.133889 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:56:54.135290 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:56:54.136794 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:56:54.137570 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:56:54.138392 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:56:54.138486 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:56:54.139627 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:56:54.143236 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 00:56:54.148254 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:56:54.150445 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:56:54.153292 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:56:54.157270 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:56:54.158586 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:56:54.162115 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:56:54.171122 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:56:54.173185 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:56:54.180358 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:56:54.187385 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:56:54.195288 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:56:54.197861 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 00:56:54.198389 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:56:54.199377 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:56:54.203219 jq[1520]: false
Mar 13 00:56:54.203550 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:56:54.214004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:56:54.216129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:56:54.216399 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:56:54.224581 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing passwd entry cache
Mar 13 00:56:54.226166 oslogin_cache_refresh[1522]: Refreshing passwd entry cache
Mar 13 00:56:54.228690 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:56:54.229964 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:56:54.242154 jq[1533]: true
Mar 13 00:56:54.243548 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting users, quitting
Mar 13 00:56:54.243548 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:56:54.243548 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing group entry cache
Mar 13 00:56:54.242880 oslogin_cache_refresh[1522]: Failure getting users, quitting
Mar 13 00:56:54.242897 oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:56:54.242936 oslogin_cache_refresh[1522]: Refreshing group entry cache
Mar 13 00:56:54.247155 extend-filesystems[1521]: Found /dev/sda6
Mar 13 00:56:54.252367 extend-filesystems[1521]: Found /dev/sda9
Mar 13 00:56:54.252367 extend-filesystems[1521]: Checking size of /dev/sda9
Mar 13 00:56:54.248914 oslogin_cache_refresh[1522]: Failure getting groups, quitting
Mar 13 00:56:54.264707 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting groups, quitting
Mar 13 00:56:54.264707 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:56:54.264754 update_engine[1531]: I20260313 00:56:54.249302 1531 main.cc:92] Flatcar Update Engine starting
Mar 13 00:56:54.254935 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:56:54.248925 oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:56:54.270447 coreos-metadata[1517]: Mar 13 00:56:54.266 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 13 00:56:54.262615 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:56:54.284173 jq[1553]: true
Mar 13 00:56:54.278803 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:56:54.293340 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:56:54.293606 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:56:54.296963 tar[1539]: linux-amd64/LICENSE
Mar 13 00:56:54.301127 tar[1539]: linux-amd64/helm
Mar 13 00:56:54.301338 dbus-daemon[1518]: [system] SELinux support is enabled
Mar 13 00:56:54.301477 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:56:54.302232 extend-filesystems[1521]: Resized partition /dev/sda9
Mar 13 00:56:54.305429 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:56:54.305460 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:56:54.307222 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:56:54.307237 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:56:54.311601 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:56:54.331978 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:56:54.337997 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Mar 13 00:56:54.339570 update_engine[1531]: I20260313 00:56:54.339519 1531 update_check_scheduler.cc:74] Next update check in 8m30s
Mar 13 00:56:54.346341 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:56:54.372265 systemd-logind[1530]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 13 00:56:54.372549 systemd-logind[1530]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:56:54.372833 systemd-logind[1530]: New seat seat0.
Mar 13 00:56:54.377298 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:56:54.460929 bash[1584]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:56:54.464479 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:56:54.469850 systemd[1]: Starting sshkeys.service...
Mar 13 00:56:54.517975 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 13 00:56:54.522199 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 13 00:56:54.637209 containerd[1550]: time="2026-03-13T00:56:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:56:54.638957 containerd[1550]: time="2026-03-13T00:56:54.638936240Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:56:54.657506 coreos-metadata[1593]: Mar 13 00:56:54.657 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Mar 13 00:56:54.661008 containerd[1550]: time="2026-03-13T00:56:54.660873400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.61µs"
Mar 13 00:56:54.661008 containerd[1550]: time="2026-03-13T00:56:54.660899690Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:56:54.661008 containerd[1550]: time="2026-03-13T00:56:54.660919310Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:56:54.661255 containerd[1550]: time="2026-03-13T00:56:54.661236700Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:56:54.661981 containerd[1550]: time="2026-03-13T00:56:54.661483750Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:56:54.661981 containerd[1550]: time="2026-03-13T00:56:54.661519260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:56:54.662148 containerd[1550]: time="2026-03-13T00:56:54.661790690Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:56:54.662202 containerd[1550]: time="2026-03-13T00:56:54.662187160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:56:54.663590 containerd[1550]: time="2026-03-13T00:56:54.663536560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:56:54.664277 containerd[1550]: time="2026-03-13T00:56:54.663555600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:56:54.664534 containerd[1550]: time="2026-03-13T00:56:54.664344580Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:56:54.665197 containerd[1550]: time="2026-03-13T00:56:54.664781470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:56:54.665197 containerd[1550]: time="2026-03-13T00:56:54.664880740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 13 00:56:54.666858 containerd[1550]: time="2026-03-13T00:56:54.666445500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:56:54.666858 containerd[1550]: time="2026-03-13T00:56:54.666480790Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 13 00:56:54.666858 containerd[1550]: time="2026-03-13T00:56:54.666490250Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 13 00:56:54.666858 containerd[1550]: time="2026-03-13T00:56:54.666513560Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 13 00:56:54.666858 containerd[1550]: time="2026-03-13T00:56:54.666661970Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 13 00:56:54.666858 containerd[1550]: time="2026-03-13T00:56:54.666722480Z" level=info msg="metadata content store policy set" policy=shared
Mar 13 00:56:54.670662 locksmithd[1572]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 00:56:54.686042 containerd[1550]: time="2026-03-13T00:56:54.686009800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 13 00:56:54.686317 containerd[1550]: time="2026-03-13T00:56:54.686298490Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 13 00:56:54.686623 containerd[1550]: time="2026-03-13T00:56:54.686536100Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 13 00:56:54.686871 containerd[1550]: time="2026-03-13T00:56:54.686692020Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 13 00:56:54.686959 containerd[1550]: time="2026-03-13T00:56:54.686944210Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 13 00:56:54.687050 containerd[1550]: time="2026-03-13T00:56:54.687035930Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 13 00:56:54.687129 containerd[1550]: time="2026-03-13T00:56:54.687116840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 13 00:56:54.687302 containerd[1550]: time="2026-03-13T00:56:54.687287990Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 13 00:56:54.687406 containerd[1550]: time="2026-03-13T00:56:54.687391200Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 13 00:56:54.687683 containerd[1550]: time="2026-03-13T00:56:54.687541950Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 13 00:56:54.687683 containerd[1550]: time="2026-03-13T00:56:54.687559260Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 13 00:56:54.687683 containerd[1550]: time="2026-03-13T00:56:54.687577650Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689235310Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689259370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689293290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689303880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689313540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689333190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689362660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689373660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689387060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689396260Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689405140Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689461340Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689473460Z" level=info msg="Start snapshots syncer"
Mar 13 00:56:54.690142 containerd[1550]: time="2026-03-13T00:56:54.689496600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 13 00:56:54.690401 containerd[1550]: time="2026-03-13T00:56:54.689991750Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 13 00:56:54.690401 containerd[1550]: time="2026-03-13T00:56:54.690031000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690082160Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690735590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690757510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690767750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690777430Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690809370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690822240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690887590Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690911650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690921490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 13 00:56:54.690962 containerd[1550]: time="2026-03-13T00:56:54.690931680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691343200Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691361740Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691369660Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691378100Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691385410Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691631080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691652960Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691668880Z" level=info msg="runtime interface created"
Mar 13 00:56:54.691873 containerd[1550]: time="2026-03-13T00:56:54.691674400Z" level=info msg="created NRI interface"
Mar 13 00:56:54.692741 containerd[1550]: time="2026-03-13T00:56:54.691681720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 13 00:56:54.692741 containerd[1550]: time="2026-03-13T00:56:54.691935510Z" level=info msg="Connect containerd service"
Mar 13 00:56:54.692741 containerd[1550]: time="2026-03-13T00:56:54.692646440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 13 00:56:54.696479
containerd[1550]: time="2026-03-13T00:56:54.695938790Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:56:54.712395 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:56:54.733119 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Mar 13 00:56:54.747421 extend-filesystems[1567]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 13 00:56:54.747421 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 10 Mar 13 00:56:54.747421 extend-filesystems[1567]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Mar 13 00:56:54.764763 extend-filesystems[1521]: Resized filesystem in /dev/sda9 Mar 13 00:56:54.762631 dbus-daemon[1518]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1442 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 13 00:56:54.748841 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:56:54.749132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:56:54.753617 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:56:54.760916 systemd-networkd[1442]: eth0: DHCPv4 address 172.236.110.174/24, gateway 172.236.110.1 acquired from 23.205.167.177 Mar 13 00:56:54.761706 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:56:54.762321 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection. Mar 13 00:56:54.778374 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 13 00:56:54.802735 systemd[1]: issuegen.service: Deactivated successfully. 
Mar 13 00:56:54.804502 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:56:54.813219 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:56:54.831020 containerd[1550]: time="2026-03-13T00:56:54.830976060Z" level=info msg="Start subscribing containerd event" Mar 13 00:56:54.831342 containerd[1550]: time="2026-03-13T00:56:54.831303820Z" level=info msg="Start recovering state" Mar 13 00:56:54.831527 containerd[1550]: time="2026-03-13T00:56:54.831505020Z" level=info msg="Start event monitor" Mar 13 00:56:54.831743 containerd[1550]: time="2026-03-13T00:56:54.831714230Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:56:54.831843 containerd[1550]: time="2026-03-13T00:56:54.831827300Z" level=info msg="Start streaming server" Mar 13 00:56:54.831870 containerd[1550]: time="2026-03-13T00:56:54.831847360Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:56:54.831870 containerd[1550]: time="2026-03-13T00:56:54.831855310Z" level=info msg="runtime interface starting up..." Mar 13 00:56:54.831870 containerd[1550]: time="2026-03-13T00:56:54.831860820Z" level=info msg="starting plugins..." Mar 13 00:56:54.831932 containerd[1550]: time="2026-03-13T00:56:54.831875730Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:56:54.834118 containerd[1550]: time="2026-03-13T00:56:54.832746330Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:56:54.834118 containerd[1550]: time="2026-03-13T00:56:54.832800690Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 00:56:54.833825 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:56:54.834605 containerd[1550]: time="2026-03-13T00:56:54.834582150Z" level=info msg="containerd successfully booted in 0.200648s" Mar 13 00:56:54.842671 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Mar 13 00:56:54.848499 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:56:54.851248 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:56:54.852655 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:56:54.870941 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 13 00:56:54.872065 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 13 00:56:54.873563 dbus-daemon[1518]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1622 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 13 00:56:54.879322 systemd[1]: Starting polkit.service - Authorization Manager... Mar 13 00:56:54.954495 polkitd[1635]: Started polkitd version 126 Mar 13 00:56:54.958851 polkitd[1635]: Loading rules from directory /etc/polkit-1/rules.d Mar 13 00:56:54.959354 polkitd[1635]: Loading rules from directory /run/polkit-1/rules.d Mar 13 00:56:54.959405 polkitd[1635]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:56:54.959594 polkitd[1635]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 13 00:56:54.959621 polkitd[1635]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:56:54.959653 polkitd[1635]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 13 00:56:54.960313 polkitd[1635]: Finished loading, compiling and executing 2 rules Mar 13 00:56:54.960524 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 13 00:56:54.961654 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 13 00:56:54.962673 polkitd[1635]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 13 00:56:54.970337 systemd-resolved[1444]: System hostname changed to '172-236-110-174'. Mar 13 00:56:54.970422 systemd-hostnamed[1622]: Hostname set to <172-236-110-174> (transient) Mar 13 00:56:55.001855 tar[1539]: linux-amd64/README.md Mar 13 00:56:55.019657 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:56:55.277597 coreos-metadata[1517]: Mar 13 00:56:55.277 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 13 00:56:55.366775 coreos-metadata[1517]: Mar 13 00:56:55.366 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Mar 13 00:56:55.595675 coreos-metadata[1517]: Mar 13 00:56:55.595 INFO Fetch successful Mar 13 00:56:55.595675 coreos-metadata[1517]: Mar 13 00:56:55.595 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Mar 13 00:56:55.667330 coreos-metadata[1593]: Mar 13 00:56:55.667 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Mar 13 00:56:55.761009 coreos-metadata[1593]: Mar 13 00:56:55.760 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Mar 13 00:56:55.850832 coreos-metadata[1517]: Mar 13 00:56:55.850 INFO Fetch successful Mar 13 00:56:55.898770 coreos-metadata[1593]: Mar 13 00:56:55.898 INFO Fetch successful Mar 13 00:56:55.924174 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:56:55.923589 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 13 00:56:55.927883 systemd[1]: Finished sshkeys.service. Mar 13 00:56:55.971075 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 00:56:55.973397 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 13 00:56:56.018345 systemd-networkd[1442]: eth0: Gained IPv6LL Mar 13 00:56:56.021847 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:56:56.023196 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:56:56.026050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:56:57.270425 systemd-timesyncd[1473]: Contacted time server 129.250.35.251:123 (0.flatcar.pool.ntp.org). Mar 13 00:56:57.270479 systemd-timesyncd[1473]: Initial clock synchronization to Fri 2026-03-13 00:56:57.268408 UTC. Mar 13 00:56:57.270520 systemd-resolved[1444]: Clock change detected. Flushing caches. Mar 13 00:56:57.272485 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:56:57.303979 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 00:56:58.147739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:56:58.149546 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:56:58.204509 systemd[1]: Startup finished in 3.103s (kernel) + 8.575s (initrd) + 5.792s (userspace) = 17.471s. Mar 13 00:56:58.214052 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:56:58.531114 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:56:58.532666 systemd[1]: Started sshd@0-172.236.110.174:22-68.220.241.50:49852.service - OpenSSH per-connection server daemon (68.220.241.50:49852). Mar 13 00:56:58.687052 sshd[1701]: Accepted publickey for core from 68.220.241.50 port 49852 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:56:58.691158 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:56:58.699011 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 13 00:56:58.701674 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:56:58.704746 kubelet[1691]: E0313 00:56:58.704698 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:56:58.710762 systemd-logind[1530]: New session 1 of user core. Mar 13 00:56:58.711796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:56:58.711982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:56:58.718009 systemd[1]: kubelet.service: Consumed 837ms CPU time, 257.9M memory peak. Mar 13 00:56:58.729428 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:56:58.733008 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:56:58.744264 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:56:58.747622 systemd-logind[1530]: New session c1 of user core. Mar 13 00:56:58.882012 systemd[1708]: Queued start job for default target default.target. Mar 13 00:56:58.888628 systemd[1708]: Created slice app.slice - User Application Slice. Mar 13 00:56:58.888658 systemd[1708]: Reached target paths.target - Paths. Mar 13 00:56:58.888705 systemd[1708]: Reached target timers.target - Timers. Mar 13 00:56:58.890392 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:56:58.904414 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:56:58.904548 systemd[1708]: Reached target sockets.target - Sockets. Mar 13 00:56:58.904592 systemd[1708]: Reached target basic.target - Basic System. 
Mar 13 00:56:58.904640 systemd[1708]: Reached target default.target - Main User Target. Mar 13 00:56:58.904678 systemd[1708]: Startup finished in 148ms. Mar 13 00:56:58.905054 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:56:58.912571 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:56:59.003629 systemd[1]: Started sshd@1-172.236.110.174:22-68.220.241.50:49864.service - OpenSSH per-connection server daemon (68.220.241.50:49864). Mar 13 00:56:59.180002 sshd[1719]: Accepted publickey for core from 68.220.241.50 port 49864 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:56:59.181916 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:56:59.188614 systemd-logind[1530]: New session 2 of user core. Mar 13 00:56:59.194415 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:56:59.253783 sshd[1722]: Connection closed by 68.220.241.50 port 49864 Mar 13 00:56:59.255452 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Mar 13 00:56:59.259203 systemd[1]: sshd@1-172.236.110.174:22-68.220.241.50:49864.service: Deactivated successfully. Mar 13 00:56:59.261237 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:56:59.262417 systemd-logind[1530]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:56:59.264503 systemd-logind[1530]: Removed session 2. Mar 13 00:56:59.283744 systemd[1]: Started sshd@2-172.236.110.174:22-68.220.241.50:49876.service - OpenSSH per-connection server daemon (68.220.241.50:49876). Mar 13 00:56:59.424311 sshd[1728]: Accepted publickey for core from 68.220.241.50 port 49876 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:56:59.426397 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:56:59.430762 systemd-logind[1530]: New session 3 of user core. 
Mar 13 00:56:59.441407 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:56:59.487838 sshd[1731]: Connection closed by 68.220.241.50 port 49876 Mar 13 00:56:59.489432 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Mar 13 00:56:59.493767 systemd-logind[1530]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:56:59.493964 systemd[1]: sshd@2-172.236.110.174:22-68.220.241.50:49876.service: Deactivated successfully. Mar 13 00:56:59.496217 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:56:59.497759 systemd-logind[1530]: Removed session 3. Mar 13 00:56:59.515385 systemd[1]: Started sshd@3-172.236.110.174:22-68.220.241.50:49882.service - OpenSSH per-connection server daemon (68.220.241.50:49882). Mar 13 00:56:59.654604 sshd[1737]: Accepted publickey for core from 68.220.241.50 port 49882 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:56:59.656595 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:56:59.662465 systemd-logind[1530]: New session 4 of user core. Mar 13 00:56:59.668633 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:56:59.725208 sshd[1740]: Connection closed by 68.220.241.50 port 49882 Mar 13 00:56:59.726551 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Mar 13 00:56:59.731764 systemd[1]: sshd@3-172.236.110.174:22-68.220.241.50:49882.service: Deactivated successfully. Mar 13 00:56:59.734992 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:56:59.736585 systemd-logind[1530]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:56:59.737748 systemd-logind[1530]: Removed session 4. Mar 13 00:56:59.779477 systemd[1]: Started sshd@4-172.236.110.174:22-68.220.241.50:49894.service - OpenSSH per-connection server daemon (68.220.241.50:49894). 
Mar 13 00:56:59.933489 sshd[1746]: Accepted publickey for core from 68.220.241.50 port 49894 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:56:59.935591 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:56:59.943942 systemd-logind[1530]: New session 5 of user core. Mar 13 00:56:59.950661 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:56:59.998378 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:56:59.999066 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:57:00.348933 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:57:00.361664 (dockerd)[1768]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:57:00.589117 dockerd[1768]: time="2026-03-13T00:57:00.589035660Z" level=info msg="Starting up" Mar 13 00:57:00.593758 dockerd[1768]: time="2026-03-13T00:57:00.593729400Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:57:00.609102 dockerd[1768]: time="2026-03-13T00:57:00.608935240Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:57:00.702480 dockerd[1768]: time="2026-03-13T00:57:00.702370980Z" level=info msg="Loading containers: start." Mar 13 00:57:00.715298 kernel: Initializing XFRM netlink socket Mar 13 00:57:00.986456 systemd-networkd[1442]: docker0: Link UP Mar 13 00:57:00.990078 dockerd[1768]: time="2026-03-13T00:57:00.990021830Z" level=info msg="Loading containers: done." 
Mar 13 00:57:01.005770 dockerd[1768]: time="2026-03-13T00:57:01.005521740Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:57:01.005929 dockerd[1768]: time="2026-03-13T00:57:01.005810050Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:57:01.005929 dockerd[1768]: time="2026-03-13T00:57:01.005914660Z" level=info msg="Initializing buildkit" Mar 13 00:57:01.035740 dockerd[1768]: time="2026-03-13T00:57:01.035691970Z" level=info msg="Completed buildkit initialization" Mar 13 00:57:01.044837 dockerd[1768]: time="2026-03-13T00:57:01.043850740Z" level=info msg="Daemon has completed initialization" Mar 13 00:57:01.044837 dockerd[1768]: time="2026-03-13T00:57:01.043908000Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:57:01.045595 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:57:01.597520 containerd[1550]: time="2026-03-13T00:57:01.597463660Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 13 00:57:02.198728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906675001.mount: Deactivated successfully. 
Mar 13 00:57:03.271048 containerd[1550]: time="2026-03-13T00:57:03.270223440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.273655 containerd[1550]: time="2026-03-13T00:57:03.273123290Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074503" Mar 13 00:57:03.273763 containerd[1550]: time="2026-03-13T00:57:03.273732380Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.276748 containerd[1550]: time="2026-03-13T00:57:03.276705650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:03.278486 containerd[1550]: time="2026-03-13T00:57:03.277498180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.67996658s" Mar 13 00:57:03.278486 containerd[1550]: time="2026-03-13T00:57:03.277794230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 13 00:57:03.280012 containerd[1550]: time="2026-03-13T00:57:03.279986480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 13 00:57:04.449210 containerd[1550]: time="2026-03-13T00:57:04.449150980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:04.450194 containerd[1550]: time="2026-03-13T00:57:04.449970090Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165829" Mar 13 00:57:04.450920 containerd[1550]: time="2026-03-13T00:57:04.450884440Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:04.452958 containerd[1550]: time="2026-03-13T00:57:04.452920660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:04.453969 containerd[1550]: time="2026-03-13T00:57:04.453945970Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.17392658s" Mar 13 00:57:04.454049 containerd[1550]: time="2026-03-13T00:57:04.454035160Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 13 00:57:04.455072 containerd[1550]: time="2026-03-13T00:57:04.455048750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 13 00:57:05.514486 containerd[1550]: time="2026-03-13T00:57:05.514423050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:05.515613 containerd[1550]: time="2026-03-13T00:57:05.515562270Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729830" Mar 13 00:57:05.516204 containerd[1550]: time="2026-03-13T00:57:05.516156470Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:05.519591 containerd[1550]: time="2026-03-13T00:57:05.518626730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:05.519591 containerd[1550]: time="2026-03-13T00:57:05.519464490Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.06438245s" Mar 13 00:57:05.519591 containerd[1550]: time="2026-03-13T00:57:05.519493580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 13 00:57:05.520676 containerd[1550]: time="2026-03-13T00:57:05.520641870Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 13 00:57:06.634866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166160711.mount: Deactivated successfully. 
Mar 13 00:57:06.934954 containerd[1550]: time="2026-03-13T00:57:06.934537470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:06.937946 containerd[1550]: time="2026-03-13T00:57:06.936670030Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861776" Mar 13 00:57:06.940857 containerd[1550]: time="2026-03-13T00:57:06.940039470Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:06.942683 containerd[1550]: time="2026-03-13T00:57:06.942656070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:06.943200 containerd[1550]: time="2026-03-13T00:57:06.943161010Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.42248523s" Mar 13 00:57:06.943245 containerd[1550]: time="2026-03-13T00:57:06.943201210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 13 00:57:06.945007 containerd[1550]: time="2026-03-13T00:57:06.944956900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 13 00:57:07.457509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419884650.mount: Deactivated successfully. 
Mar 13 00:57:08.271945 containerd[1550]: time="2026-03-13T00:57:08.271625060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:08.274019 containerd[1550]: time="2026-03-13T00:57:08.273122490Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013" Mar 13 00:57:08.275341 containerd[1550]: time="2026-03-13T00:57:08.274986490Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:08.277869 containerd[1550]: time="2026-03-13T00:57:08.277832840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:08.278793 containerd[1550]: time="2026-03-13T00:57:08.278752560Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.33375862s" Mar 13 00:57:08.278901 containerd[1550]: time="2026-03-13T00:57:08.278884020Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 13 00:57:08.279703 containerd[1550]: time="2026-03-13T00:57:08.279664290Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 00:57:08.751893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:57:08.755526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 00:57:08.758739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057330822.mount: Deactivated successfully. Mar 13 00:57:08.763222 containerd[1550]: time="2026-03-13T00:57:08.763112820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:08.763855 containerd[1550]: time="2026-03-13T00:57:08.763823150Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Mar 13 00:57:08.765300 containerd[1550]: time="2026-03-13T00:57:08.764319520Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:08.765882 containerd[1550]: time="2026-03-13T00:57:08.765856890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:08.766494 containerd[1550]: time="2026-03-13T00:57:08.766473770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 486.77767ms" Mar 13 00:57:08.766577 containerd[1550]: time="2026-03-13T00:57:08.766562310Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 13 00:57:08.767619 containerd[1550]: time="2026-03-13T00:57:08.767567380Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 13 00:57:08.928093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:57:08.934577 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:57:08.976304 kubelet[2116]: E0313 00:57:08.976203 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:57:08.981176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:57:08.981419 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:57:08.981823 systemd[1]: kubelet.service: Consumed 199ms CPU time, 108.7M memory peak.
Mar 13 00:57:09.265746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785722848.mount: Deactivated successfully.
Mar 13 00:57:09.960664 containerd[1550]: time="2026-03-13T00:57:09.960596270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:57:09.961657 containerd[1550]: time="2026-03-13T00:57:09.961560380Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860680"
Mar 13 00:57:09.962322 containerd[1550]: time="2026-03-13T00:57:09.962272940Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:57:09.964573 containerd[1550]: time="2026-03-13T00:57:09.964532830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:57:09.965881 containerd[1550]: time="2026-03-13T00:57:09.965516650Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.19791798s"
Mar 13 00:57:09.965881 containerd[1550]: time="2026-03-13T00:57:09.965554360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 13 00:57:13.132213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:57:13.132559 systemd[1]: kubelet.service: Consumed 199ms CPU time, 108.7M memory peak.
Mar 13 00:57:13.135804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:57:13.173941 systemd[1]: Reload requested from client PID 2212 ('systemctl') (unit session-5.scope)...
Mar 13 00:57:13.174057 systemd[1]: Reloading...
Mar 13 00:57:13.371339 zram_generator::config[2278]: No configuration found.
Mar 13 00:57:13.555724 systemd[1]: Reloading finished in 381 ms.
Mar 13 00:57:13.610344 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 00:57:13.610457 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 00:57:13.610782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:57:13.610877 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.3M memory peak.
Mar 13 00:57:13.612481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:57:13.802190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:57:13.814693 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:57:13.861446 kubelet[2310]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 00:57:13.861446 kubelet[2310]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:57:13.861697 kubelet[2310]: I0313 00:57:13.861527 2310 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:57:14.329755 kubelet[2310]: I0313 00:57:14.329710 2310 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 13 00:57:14.330564 kubelet[2310]: I0313 00:57:14.330220 2310 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:57:14.331218 kubelet[2310]: I0313 00:57:14.331203 2310 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 00:57:14.331332 kubelet[2310]: I0313 00:57:14.331320 2310 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 00:57:14.331660 kubelet[2310]: I0313 00:57:14.331647 2310 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:57:14.338270 kubelet[2310]: E0313 00:57:14.337802 2310 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.236.110.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.110.174:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:57:14.338270 kubelet[2310]: I0313 00:57:14.338219 2310 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:57:14.345560 kubelet[2310]: I0313 00:57:14.345536 2310 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:57:14.352307 kubelet[2310]: I0313 00:57:14.350552 2310 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 00:57:14.352307 kubelet[2310]: I0313 00:57:14.351871 2310 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:57:14.352307 kubelet[2310]: I0313 00:57:14.351904 2310 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-110-174","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:57:14.352307 kubelet[2310]: I0313 00:57:14.352155 2310 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:57:14.352711 kubelet[2310]: I0313 00:57:14.352166 2310 container_manager_linux.go:306] "Creating device plugin manager"
Mar 13 00:57:14.352711 kubelet[2310]: I0313 00:57:14.352316 2310 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 00:57:14.355429 kubelet[2310]: I0313 00:57:14.355409 2310 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:57:14.355736 kubelet[2310]: I0313 00:57:14.355723 2310 kubelet.go:475] "Attempting to sync node with API server"
Mar 13 00:57:14.355803 kubelet[2310]: I0313 00:57:14.355792 2310 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:57:14.355872 kubelet[2310]: I0313 00:57:14.355863 2310 kubelet.go:387] "Adding apiserver pod source"
Mar 13 00:57:14.355930 kubelet[2310]: I0313 00:57:14.355920 2310 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:57:14.356584 kubelet[2310]: E0313 00:57:14.356255 2310 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.236.110.174:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-110-174&limit=500&resourceVersion=0\": dial tcp 172.236.110.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 00:57:14.359151 kubelet[2310]: E0313 00:57:14.359115 2310 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.110.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.110.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:57:14.359502 kubelet[2310]: I0313 00:57:14.359486 2310 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:57:14.360165 kubelet[2310]: I0313 00:57:14.360151 2310 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:57:14.360234 kubelet[2310]: I0313 00:57:14.360225 2310 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 00:57:14.360349 kubelet[2310]: W0313 00:57:14.360335 2310 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 00:57:14.364903 kubelet[2310]: I0313 00:57:14.364889 2310 server.go:1262] "Started kubelet"
Mar 13 00:57:14.366238 kubelet[2310]: I0313 00:57:14.366224 2310 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 00:57:14.370826 kubelet[2310]: E0313 00:57:14.368117 2310 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.110.174:6443/api/v1/namespaces/default/events\": dial tcp 172.236.110.174:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-110-174.189c40a5dafbf016 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-110-174,UID:172-236-110-174,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-110-174,},FirstTimestamp:2026-03-13 00:57:14.36485839 +0000 UTC m=+0.543757471,LastTimestamp:2026-03-13 00:57:14.36485839 +0000 UTC m=+0.543757471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-110-174,}"
Mar 13 00:57:14.373247 kubelet[2310]: I0313 00:57:14.373083 2310 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:57:14.374566 kubelet[2310]: I0313 00:57:14.374533 2310 server.go:310] "Adding debug handlers to kubelet server"
Mar 13 00:57:14.378236 kubelet[2310]: I0313 00:57:14.378219 2310 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 13 00:57:14.379099 kubelet[2310]: E0313 00:57:14.378534 2310 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-110-174\" not found"
Mar 13 00:57:14.379099 kubelet[2310]: I0313 00:57:14.378832 2310 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 00:57:14.379099 kubelet[2310]: I0313 00:57:14.378869 2310 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 00:57:14.379544 kubelet[2310]: I0313 00:57:14.379498 2310 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:57:14.379603 kubelet[2310]: I0313 00:57:14.379567 2310 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 00:57:14.379827 kubelet[2310]: I0313 00:57:14.379795 2310 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:57:14.380104 kubelet[2310]: I0313 00:57:14.380076 2310 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:57:14.382083 kubelet[2310]: E0313 00:57:14.382047 2310 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.110.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.110.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:57:14.382522 kubelet[2310]: E0313 00:57:14.382489 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.110.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-110-174?timeout=10s\": dial tcp 172.236.110.174:6443: connect: connection refused" interval="200ms"
Mar 13 00:57:14.384103 kubelet[2310]: I0313 00:57:14.384074 2310 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:57:14.384316 kubelet[2310]: I0313 00:57:14.384240 2310 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:57:14.385934 kubelet[2310]: I0313 00:57:14.385919 2310 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:57:14.387329 kubelet[2310]: E0313 00:57:14.386653 2310 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:57:14.418717 kubelet[2310]: I0313 00:57:14.417852 2310 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 00:57:14.418717 kubelet[2310]: I0313 00:57:14.417893 2310 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 00:57:14.418717 kubelet[2310]: I0313 00:57:14.417909 2310 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:57:14.421434 kubelet[2310]: I0313 00:57:14.420629 2310 policy_none.go:49] "None policy: Start"
Mar 13 00:57:14.421434 kubelet[2310]: I0313 00:57:14.420655 2310 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 00:57:14.421434 kubelet[2310]: I0313 00:57:14.420667 2310 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 00:57:14.421434 kubelet[2310]: I0313 00:57:14.420936 2310 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 00:57:14.422635 kubelet[2310]: I0313 00:57:14.422604 2310 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 00:57:14.422699 kubelet[2310]: I0313 00:57:14.422660 2310 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 13 00:57:14.422699 kubelet[2310]: I0313 00:57:14.422686 2310 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 13 00:57:14.422904 kubelet[2310]: E0313 00:57:14.422873 2310 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 00:57:14.423474 kubelet[2310]: I0313 00:57:14.423446 2310 policy_none.go:47] "Start"
Mar 13 00:57:14.424770 kubelet[2310]: E0313 00:57:14.424747 2310 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.110.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.110.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 00:57:14.433896 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 00:57:14.449008 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 13 00:57:14.453998 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 13 00:57:14.466781 kubelet[2310]: E0313 00:57:14.466484 2310 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 00:57:14.467078 kubelet[2310]: I0313 00:57:14.467053 2310 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 00:57:14.467129 kubelet[2310]: I0313 00:57:14.467077 2310 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 00:57:14.468753 kubelet[2310]: I0313 00:57:14.467862 2310 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 00:57:14.469936 kubelet[2310]: E0313 00:57:14.469884 2310 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 00:57:14.469975 kubelet[2310]: E0313 00:57:14.469947 2310 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-110-174\" not found"
Mar 13 00:57:14.537032 systemd[1]: Created slice kubepods-burstable-pod01a0e4fa6996a6b36c900a1ffedd5c53.slice - libcontainer container kubepods-burstable-pod01a0e4fa6996a6b36c900a1ffedd5c53.slice.
Mar 13 00:57:14.549390 kubelet[2310]: E0313 00:57:14.549326 2310 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-110-174\" not found" node="172-236-110-174"
Mar 13 00:57:14.553197 systemd[1]: Created slice kubepods-burstable-pod2af24e7aa761300f4f2c736ba8a436f4.slice - libcontainer container kubepods-burstable-pod2af24e7aa761300f4f2c736ba8a436f4.slice.
Mar 13 00:57:14.556811 kubelet[2310]: E0313 00:57:14.556771 2310 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-110-174\" not found" node="172-236-110-174"
Mar 13 00:57:14.560032 systemd[1]: Created slice kubepods-burstable-podcf62571ca2cb48bbed3348f19b2b0f84.slice - libcontainer container kubepods-burstable-podcf62571ca2cb48bbed3348f19b2b0f84.slice.
Mar 13 00:57:14.561803 kubelet[2310]: E0313 00:57:14.561774 2310 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-110-174\" not found" node="172-236-110-174"
Mar 13 00:57:14.569129 kubelet[2310]: I0313 00:57:14.569108 2310 kubelet_node_status.go:75] "Attempting to register node" node="172-236-110-174"
Mar 13 00:57:14.569564 kubelet[2310]: E0313 00:57:14.569536 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.110.174:6443/api/v1/nodes\": dial tcp 172.236.110.174:6443: connect: connection refused" node="172-236-110-174"
Mar 13 00:57:14.583414 kubelet[2310]: E0313 00:57:14.583273 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.110.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-110-174?timeout=10s\": dial tcp 172.236.110.174:6443: connect: connection refused" interval="400ms"
Mar 13 00:57:14.680687 kubelet[2310]: I0313 00:57:14.680620 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174"
Mar 13 00:57:14.680687 kubelet[2310]: I0313 00:57:14.680660 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2af24e7aa761300f4f2c736ba8a436f4-kubeconfig\") pod \"kube-scheduler-172-236-110-174\" (UID: \"2af24e7aa761300f4f2c736ba8a436f4\") " pod="kube-system/kube-scheduler-172-236-110-174"
Mar 13 00:57:14.680687 kubelet[2310]: I0313 00:57:14.680687 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf62571ca2cb48bbed3348f19b2b0f84-k8s-certs\") pod \"kube-apiserver-172-236-110-174\" (UID: \"cf62571ca2cb48bbed3348f19b2b0f84\") " pod="kube-system/kube-apiserver-172-236-110-174"
Mar 13 00:57:14.680687 kubelet[2310]: I0313 00:57:14.680702 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-ca-certs\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174"
Mar 13 00:57:14.680943 kubelet[2310]: I0313 00:57:14.680717 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-flexvolume-dir\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174"
Mar 13 00:57:14.680943 kubelet[2310]: I0313 00:57:14.680733 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf62571ca2cb48bbed3348f19b2b0f84-ca-certs\") pod \"kube-apiserver-172-236-110-174\" (UID: \"cf62571ca2cb48bbed3348f19b2b0f84\") " pod="kube-system/kube-apiserver-172-236-110-174"
Mar 13 00:57:14.680943 kubelet[2310]: I0313 00:57:14.680747 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf62571ca2cb48bbed3348f19b2b0f84-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-110-174\" (UID: \"cf62571ca2cb48bbed3348f19b2b0f84\") " pod="kube-system/kube-apiserver-172-236-110-174"
Mar 13 00:57:14.680943 kubelet[2310]: I0313 00:57:14.680772 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-k8s-certs\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174"
Mar 13 00:57:14.680943 kubelet[2310]: I0313 00:57:14.680791 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-kubeconfig\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174"
Mar 13 00:57:14.771614 kubelet[2310]: I0313 00:57:14.771572 2310 kubelet_node_status.go:75] "Attempting to register node" node="172-236-110-174"
Mar 13 00:57:14.772216 kubelet[2310]: E0313 00:57:14.772142 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.110.174:6443/api/v1/nodes\": dial tcp 172.236.110.174:6443: connect: connection refused" node="172-236-110-174"
Mar 13 00:57:14.851612 kubelet[2310]: E0313 00:57:14.851451 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Mar 13 00:57:14.852661 containerd[1550]: time="2026-03-13T00:57:14.852626940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-110-174,Uid:01a0e4fa6996a6b36c900a1ffedd5c53,Namespace:kube-system,Attempt:0,}"
Mar 13 00:57:14.858838 kubelet[2310]: E0313 00:57:14.858820 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Mar 13 00:57:14.859129 containerd[1550]: time="2026-03-13T00:57:14.859098590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-110-174,Uid:2af24e7aa761300f4f2c736ba8a436f4,Namespace:kube-system,Attempt:0,}"
Mar 13 00:57:14.863706 kubelet[2310]: E0313 00:57:14.863682 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Mar 13 00:57:14.864407 containerd[1550]: time="2026-03-13T00:57:14.863994920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-110-174,Uid:cf62571ca2cb48bbed3348f19b2b0f84,Namespace:kube-system,Attempt:0,}"
Mar 13 00:57:14.984872 kubelet[2310]: E0313 00:57:14.984788 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.110.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-110-174?timeout=10s\": dial tcp 172.236.110.174:6443: connect: connection refused" interval="800ms"
Mar 13 00:57:15.174232 kubelet[2310]: I0313 00:57:15.174190 2310 kubelet_node_status.go:75] "Attempting to register node" node="172-236-110-174"
Mar 13 00:57:15.174587 kubelet[2310]: E0313 00:57:15.174551 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.110.174:6443/api/v1/nodes\": dial tcp 172.236.110.174:6443: connect: connection refused" node="172-236-110-174"
Mar 13 00:57:15.312212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200173893.mount: Deactivated successfully.
Mar 13 00:57:15.317015 containerd[1550]: time="2026-03-13T00:57:15.316969200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:57:15.321098 containerd[1550]: time="2026-03-13T00:57:15.321057780Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144"
Mar 13 00:57:15.321507 containerd[1550]: time="2026-03-13T00:57:15.321463610Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:57:15.322013 containerd[1550]: time="2026-03-13T00:57:15.321976280Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:57:15.323394 containerd[1550]: time="2026-03-13T00:57:15.323244600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:57:15.324262 containerd[1550]: time="2026-03-13T00:57:15.324183160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 13 00:57:15.324825 containerd[1550]: time="2026-03-13T00:57:15.324806120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 13 00:57:15.325744 containerd[1550]: time="2026-03-13T00:57:15.325666520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:57:15.327038 containerd[1550]: time="2026-03-13T00:57:15.327002470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 466.89108ms"
Mar 13 00:57:15.328170 containerd[1550]: time="2026-03-13T00:57:15.328116230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 474.17805ms"
Mar 13 00:57:15.329367 containerd[1550]: time="2026-03-13T00:57:15.329151990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 464.43055ms"
Mar 13 00:57:15.364361 containerd[1550]: time="2026-03-13T00:57:15.363579210Z" level=info msg="connecting to shim 90898cb3a3f451e19c4de014346a9f534d2b15cf79e9457429670964cdd815b9" address="unix:///run/containerd/s/06bdd86b125f57b4bef2585604726198b065da2a35bccd193122530ae4648767" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:57:15.376522 containerd[1550]: time="2026-03-13T00:57:15.376460040Z" level=info msg="connecting to shim 6482e1badc354b94e4b8da01b3cdd94f605212e67cf7453a3a1babc9d9a30f28" address="unix:///run/containerd/s/db6b0fa4a04df817062ea04fd91ed276716326009e095d38b73991d65f4b4626" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:57:15.387406 containerd[1550]: time="2026-03-13T00:57:15.387372890Z" level=info msg="connecting to shim a6453022d88cd5c1e4815a83a3825dc48723f0b4ed2181e3681a1fc50bc2ca5c" address="unix:///run/containerd/s/a75c7ac506ade2849326e07f9019e3501f49693aacd3f8b9ad8ac96c9ab20e64" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:57:15.413624 systemd[1]: Started cri-containerd-6482e1badc354b94e4b8da01b3cdd94f605212e67cf7453a3a1babc9d9a30f28.scope - libcontainer container 6482e1badc354b94e4b8da01b3cdd94f605212e67cf7453a3a1babc9d9a30f28.
Mar 13 00:57:15.420696 systemd[1]: Started cri-containerd-90898cb3a3f451e19c4de014346a9f534d2b15cf79e9457429670964cdd815b9.scope - libcontainer container 90898cb3a3f451e19c4de014346a9f534d2b15cf79e9457429670964cdd815b9.
Mar 13 00:57:15.429165 systemd[1]: Started cri-containerd-a6453022d88cd5c1e4815a83a3825dc48723f0b4ed2181e3681a1fc50bc2ca5c.scope - libcontainer container a6453022d88cd5c1e4815a83a3825dc48723f0b4ed2181e3681a1fc50bc2ca5c.
Mar 13 00:57:15.493735 containerd[1550]: time="2026-03-13T00:57:15.493665480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-110-174,Uid:2af24e7aa761300f4f2c736ba8a436f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"90898cb3a3f451e19c4de014346a9f534d2b15cf79e9457429670964cdd815b9\""
Mar 13 00:57:15.495703 kubelet[2310]: E0313 00:57:15.495656 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Mar 13 00:57:15.500957 containerd[1550]: time="2026-03-13T00:57:15.500778620Z" level=info msg="CreateContainer within sandbox \"90898cb3a3f451e19c4de014346a9f534d2b15cf79e9457429670964cdd815b9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 13 00:57:15.526717 containerd[1550]: time="2026-03-13T00:57:15.526663200Z" level=info msg="Container 0a2efa657c8797f29e95b8b406474c048d36d83fe4ae95dd31b56aa435347e1a: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:57:15.538232 containerd[1550]: time="2026-03-13T00:57:15.537995880Z" level=info msg="CreateContainer within sandbox \"90898cb3a3f451e19c4de014346a9f534d2b15cf79e9457429670964cdd815b9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a2efa657c8797f29e95b8b406474c048d36d83fe4ae95dd31b56aa435347e1a\""
Mar 13 00:57:15.540095 containerd[1550]: time="2026-03-13T00:57:15.540049790Z" level=info msg="StartContainer for \"0a2efa657c8797f29e95b8b406474c048d36d83fe4ae95dd31b56aa435347e1a\""
Mar 13 00:57:15.542756 containerd[1550]: time="2026-03-13T00:57:15.542711820Z" level=info msg="connecting to shim 0a2efa657c8797f29e95b8b406474c048d36d83fe4ae95dd31b56aa435347e1a" address="unix:///run/containerd/s/06bdd86b125f57b4bef2585604726198b065da2a35bccd193122530ae4648767" protocol=ttrpc version=3
Mar 13 00:57:15.549258 containerd[1550]: time="2026-03-13T00:57:15.549156070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-110-174,Uid:01a0e4fa6996a6b36c900a1ffedd5c53,Namespace:kube-system,Attempt:0,} returns sandbox id \"6482e1badc354b94e4b8da01b3cdd94f605212e67cf7453a3a1babc9d9a30f28\""
Mar 13 00:57:15.553704 kubelet[2310]: E0313 00:57:15.553592 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Mar 13 00:57:15.560635 containerd[1550]: time="2026-03-13T00:57:15.560584900Z" level=info msg="CreateContainer within sandbox \"6482e1badc354b94e4b8da01b3cdd94f605212e67cf7453a3a1babc9d9a30f28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 13 00:57:15.561217 containerd[1550]: time="2026-03-13T00:57:15.561073000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-110-174,Uid:cf62571ca2cb48bbed3348f19b2b0f84,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6453022d88cd5c1e4815a83a3825dc48723f0b4ed2181e3681a1fc50bc2ca5c\""
Mar 13 00:57:15.563490
kubelet[2310]: E0313 00:57:15.563463 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:15.574506 containerd[1550]: time="2026-03-13T00:57:15.574478520Z" level=info msg="CreateContainer within sandbox \"a6453022d88cd5c1e4815a83a3825dc48723f0b4ed2181e3681a1fc50bc2ca5c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:57:15.574771 kubelet[2310]: E0313 00:57:15.574733 2310 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.110.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.110.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:57:15.582967 containerd[1550]: time="2026-03-13T00:57:15.582925670Z" level=info msg="Container 401d8d371c4f0068c95fbdf462406bc7ea69139a9071ff058a78e94f83714c4b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:15.586647 containerd[1550]: time="2026-03-13T00:57:15.586517750Z" level=info msg="Container 57a0e0a31a1804ea5bd90e0fb005381a7d1866f861acf7faab8b2f80db9dc23a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:15.591678 systemd[1]: Started cri-containerd-0a2efa657c8797f29e95b8b406474c048d36d83fe4ae95dd31b56aa435347e1a.scope - libcontainer container 0a2efa657c8797f29e95b8b406474c048d36d83fe4ae95dd31b56aa435347e1a. 
Mar 13 00:57:15.592959 containerd[1550]: time="2026-03-13T00:57:15.592879360Z" level=info msg="CreateContainer within sandbox \"6482e1badc354b94e4b8da01b3cdd94f605212e67cf7453a3a1babc9d9a30f28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"401d8d371c4f0068c95fbdf462406bc7ea69139a9071ff058a78e94f83714c4b\"" Mar 13 00:57:15.593895 containerd[1550]: time="2026-03-13T00:57:15.593834940Z" level=info msg="StartContainer for \"401d8d371c4f0068c95fbdf462406bc7ea69139a9071ff058a78e94f83714c4b\"" Mar 13 00:57:15.596491 containerd[1550]: time="2026-03-13T00:57:15.596437610Z" level=info msg="connecting to shim 401d8d371c4f0068c95fbdf462406bc7ea69139a9071ff058a78e94f83714c4b" address="unix:///run/containerd/s/db6b0fa4a04df817062ea04fd91ed276716326009e095d38b73991d65f4b4626" protocol=ttrpc version=3 Mar 13 00:57:15.600131 containerd[1550]: time="2026-03-13T00:57:15.600091580Z" level=info msg="CreateContainer within sandbox \"a6453022d88cd5c1e4815a83a3825dc48723f0b4ed2181e3681a1fc50bc2ca5c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57a0e0a31a1804ea5bd90e0fb005381a7d1866f861acf7faab8b2f80db9dc23a\"" Mar 13 00:57:15.601722 containerd[1550]: time="2026-03-13T00:57:15.600972550Z" level=info msg="StartContainer for \"57a0e0a31a1804ea5bd90e0fb005381a7d1866f861acf7faab8b2f80db9dc23a\"" Mar 13 00:57:15.605755 containerd[1550]: time="2026-03-13T00:57:15.605732460Z" level=info msg="connecting to shim 57a0e0a31a1804ea5bd90e0fb005381a7d1866f861acf7faab8b2f80db9dc23a" address="unix:///run/containerd/s/a75c7ac506ade2849326e07f9019e3501f49693aacd3f8b9ad8ac96c9ab20e64" protocol=ttrpc version=3 Mar 13 00:57:15.627704 systemd[1]: Started cri-containerd-401d8d371c4f0068c95fbdf462406bc7ea69139a9071ff058a78e94f83714c4b.scope - libcontainer container 401d8d371c4f0068c95fbdf462406bc7ea69139a9071ff058a78e94f83714c4b. 
Mar 13 00:57:15.639581 systemd[1]: Started cri-containerd-57a0e0a31a1804ea5bd90e0fb005381a7d1866f861acf7faab8b2f80db9dc23a.scope - libcontainer container 57a0e0a31a1804ea5bd90e0fb005381a7d1866f861acf7faab8b2f80db9dc23a. Mar 13 00:57:15.656315 kubelet[2310]: E0313 00:57:15.655126 2310 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.110.174:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.110.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:57:15.720042 containerd[1550]: time="2026-03-13T00:57:15.719771410Z" level=info msg="StartContainer for \"401d8d371c4f0068c95fbdf462406bc7ea69139a9071ff058a78e94f83714c4b\" returns successfully" Mar 13 00:57:15.750720 containerd[1550]: time="2026-03-13T00:57:15.750601210Z" level=info msg="StartContainer for \"0a2efa657c8797f29e95b8b406474c048d36d83fe4ae95dd31b56aa435347e1a\" returns successfully" Mar 13 00:57:15.753039 containerd[1550]: time="2026-03-13T00:57:15.752974050Z" level=info msg="StartContainer for \"57a0e0a31a1804ea5bd90e0fb005381a7d1866f861acf7faab8b2f80db9dc23a\" returns successfully" Mar 13 00:57:15.785882 kubelet[2310]: E0313 00:57:15.785791 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.110.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-110-174?timeout=10s\": dial tcp 172.236.110.174:6443: connect: connection refused" interval="1.6s" Mar 13 00:57:15.842830 kubelet[2310]: E0313 00:57:15.842770 2310 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.110.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.110.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 
00:57:15.978311 kubelet[2310]: I0313 00:57:15.978161 2310 kubelet_node_status.go:75] "Attempting to register node" node="172-236-110-174" Mar 13 00:57:16.440496 kubelet[2310]: E0313 00:57:16.440458 2310 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-110-174\" not found" node="172-236-110-174" Mar 13 00:57:16.440649 kubelet[2310]: E0313 00:57:16.440586 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:16.444686 kubelet[2310]: E0313 00:57:16.444633 2310 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-110-174\" not found" node="172-236-110-174" Mar 13 00:57:16.444830 kubelet[2310]: E0313 00:57:16.444793 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:16.451497 kubelet[2310]: E0313 00:57:16.451465 2310 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-110-174\" not found" node="172-236-110-174" Mar 13 00:57:16.451653 kubelet[2310]: E0313 00:57:16.451609 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:17.294472 kubelet[2310]: I0313 00:57:17.294424 2310 kubelet_node_status.go:78] "Successfully registered node" node="172-236-110-174" Mar 13 00:57:17.365045 kubelet[2310]: I0313 00:57:17.364993 2310 apiserver.go:52] "Watching apiserver" Mar 13 00:57:17.379170 kubelet[2310]: I0313 00:57:17.379130 2310 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-110-174" Mar 13 
00:57:17.405300 kubelet[2310]: E0313 00:57:17.405130 2310 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-110-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-110-174" Mar 13 00:57:17.405300 kubelet[2310]: I0313 00:57:17.405167 2310 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:17.408516 kubelet[2310]: E0313 00:57:17.408370 2310 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-110-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:17.408516 kubelet[2310]: I0313 00:57:17.408394 2310 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-110-174" Mar 13 00:57:17.411678 kubelet[2310]: E0313 00:57:17.411639 2310 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-110-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-110-174" Mar 13 00:57:17.449671 kubelet[2310]: I0313 00:57:17.449628 2310 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-110-174" Mar 13 00:57:17.450332 kubelet[2310]: I0313 00:57:17.450301 2310 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-110-174" Mar 13 00:57:17.453912 kubelet[2310]: E0313 00:57:17.453877 2310 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-110-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-110-174" Mar 13 00:57:17.456343 kubelet[2310]: E0313 00:57:17.456310 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:17.456876 kubelet[2310]: E0313 00:57:17.456837 2310 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-110-174\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-110-174" Mar 13 00:57:17.456994 kubelet[2310]: E0313 00:57:17.456961 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:17.479336 kubelet[2310]: I0313 00:57:17.479306 2310 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:57:18.571481 kubelet[2310]: I0313 00:57:18.571431 2310 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:18.581866 kubelet[2310]: E0313 00:57:18.581821 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:19.275753 systemd[1]: Reload requested from client PID 2593 ('systemctl') (unit session-5.scope)... Mar 13 00:57:19.275772 systemd[1]: Reloading... Mar 13 00:57:19.398393 zram_generator::config[2643]: No configuration found. Mar 13 00:57:19.454979 kubelet[2310]: E0313 00:57:19.454934 2310 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:19.657026 systemd[1]: Reloading finished in 380 ms. Mar 13 00:57:19.691414 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:57:19.704841 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 13 00:57:19.705159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:57:19.705216 systemd[1]: kubelet.service: Consumed 1.017s CPU time, 125M memory peak. Mar 13 00:57:19.709709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:57:19.906215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:57:19.915620 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:57:19.965962 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:57:19.965962 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:57:19.966621 kubelet[2688]: I0313 00:57:19.966017 2688 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:57:19.973301 kubelet[2688]: I0313 00:57:19.973245 2688 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 00:57:19.973301 kubelet[2688]: I0313 00:57:19.973267 2688 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:57:19.973428 kubelet[2688]: I0313 00:57:19.973354 2688 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 00:57:19.973428 kubelet[2688]: I0313 00:57:19.973368 2688 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 00:57:19.973600 kubelet[2688]: I0313 00:57:19.973566 2688 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:57:19.974853 kubelet[2688]: I0313 00:57:19.974821 2688 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:57:19.980544 kubelet[2688]: I0313 00:57:19.980506 2688 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:57:19.985798 kubelet[2688]: I0313 00:57:19.985773 2688 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:57:19.990792 kubelet[2688]: I0313 00:57:19.990761 2688 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 13 00:57:19.991034 kubelet[2688]: I0313 00:57:19.990981 2688 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:57:19.991164 kubelet[2688]: I0313 00:57:19.991019 2688 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"172-236-110-174","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:57:19.991164 kubelet[2688]: I0313 00:57:19.991156 2688 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:57:19.991164 kubelet[2688]: I0313 00:57:19.991167 2688 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 00:57:19.991358 kubelet[2688]: I0313 00:57:19.991189 2688 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 00:57:19.991430 kubelet[2688]: I0313 00:57:19.991396 2688 state_mem.go:36] 
"Initialized new in-memory state store" Mar 13 00:57:19.992497 kubelet[2688]: I0313 00:57:19.991616 2688 kubelet.go:475] "Attempting to sync node with API server" Mar 13 00:57:19.992497 kubelet[2688]: I0313 00:57:19.991649 2688 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:57:19.992497 kubelet[2688]: I0313 00:57:19.991673 2688 kubelet.go:387] "Adding apiserver pod source" Mar 13 00:57:19.992497 kubelet[2688]: I0313 00:57:19.991696 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:57:19.993956 kubelet[2688]: I0313 00:57:19.993932 2688 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:57:19.995232 kubelet[2688]: I0313 00:57:19.994821 2688 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:57:19.995675 kubelet[2688]: I0313 00:57:19.995399 2688 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 00:57:20.001506 kubelet[2688]: I0313 00:57:20.001493 2688 server.go:1262] "Started kubelet" Mar 13 00:57:20.004746 kubelet[2688]: I0313 00:57:20.004590 2688 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:57:20.006504 kubelet[2688]: I0313 00:57:20.006490 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:57:20.010442 kubelet[2688]: I0313 00:57:20.010410 2688 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:57:20.010545 kubelet[2688]: I0313 00:57:20.010525 2688 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 00:57:20.010835 kubelet[2688]: I0313 00:57:20.010821 2688 server.go:249] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:57:20.016944 kubelet[2688]: I0313 00:57:20.016913 2688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:57:20.021877 kubelet[2688]: I0313 00:57:20.021846 2688 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 00:57:20.022019 kubelet[2688]: E0313 00:57:20.021987 2688 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-236-110-174\" not found" Mar 13 00:57:20.022372 kubelet[2688]: I0313 00:57:20.022345 2688 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 00:57:20.022514 kubelet[2688]: I0313 00:57:20.022490 2688 reconciler.go:29] "Reconciler: start to sync state" Mar 13 00:57:20.025100 kubelet[2688]: I0313 00:57:20.025069 2688 server.go:310] "Adding debug handlers to kubelet server" Mar 13 00:57:20.029998 kubelet[2688]: I0313 00:57:20.029954 2688 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 00:57:20.031025 kubelet[2688]: I0313 00:57:20.030776 2688 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:57:20.031116 kubelet[2688]: I0313 00:57:20.031084 2688 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:57:20.038929 kubelet[2688]: I0313 00:57:20.038913 2688 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 13 00:57:20.038998 kubelet[2688]: I0313 00:57:20.038989 2688 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 00:57:20.039061 kubelet[2688]: I0313 00:57:20.039053 2688 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 00:57:20.039161 kubelet[2688]: E0313 00:57:20.039140 2688 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:57:20.041565 kubelet[2688]: I0313 00:57:20.041532 2688 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:57:20.098492 kubelet[2688]: I0313 00:57:20.098461 2688 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:57:20.098492 kubelet[2688]: I0313 00:57:20.098480 2688 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:57:20.098492 kubelet[2688]: I0313 00:57:20.098500 2688 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:57:20.098655 kubelet[2688]: I0313 00:57:20.098629 2688 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:57:20.098655 kubelet[2688]: I0313 00:57:20.098640 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:57:20.098655 kubelet[2688]: I0313 00:57:20.098656 2688 policy_none.go:49] "None policy: Start" Mar 13 00:57:20.098742 kubelet[2688]: I0313 00:57:20.098666 2688 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 00:57:20.098742 kubelet[2688]: I0313 00:57:20.098677 2688 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 00:57:20.098783 kubelet[2688]: I0313 00:57:20.098757 2688 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 00:57:20.098783 kubelet[2688]: I0313 00:57:20.098766 2688 policy_none.go:47] "Start" Mar 13 00:57:20.109135 kubelet[2688]: E0313 00:57:20.108317 2688 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:57:20.109135 kubelet[2688]: I0313 00:57:20.108708 2688 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:57:20.109135 kubelet[2688]: I0313 00:57:20.108720 2688 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:57:20.110885 kubelet[2688]: I0313 00:57:20.110851 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:57:20.115314 kubelet[2688]: E0313 00:57:20.115238 2688 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:57:20.140897 kubelet[2688]: I0313 00:57:20.140854 2688 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-110-174" Mar 13 00:57:20.141374 kubelet[2688]: I0313 00:57:20.141173 2688 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-110-174" Mar 13 00:57:20.141498 kubelet[2688]: I0313 00:57:20.141466 2688 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:20.154248 kubelet[2688]: E0313 00:57:20.154130 2688 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-110-174\" already exists" pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:20.215804 kubelet[2688]: I0313 00:57:20.215653 2688 kubelet_node_status.go:75] "Attempting to register node" node="172-236-110-174" Mar 13 00:57:20.223733 kubelet[2688]: I0313 00:57:20.223621 2688 kubelet_node_status.go:124] "Node was previously registered" node="172-236-110-174" Mar 13 00:57:20.223840 kubelet[2688]: I0313 00:57:20.223812 2688 kubelet_node_status.go:78] "Successfully registered node" node="172-236-110-174" Mar 13 00:57:20.223988 kubelet[2688]: I0313 00:57:20.223919 2688 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf62571ca2cb48bbed3348f19b2b0f84-k8s-certs\") pod \"kube-apiserver-172-236-110-174\" (UID: \"cf62571ca2cb48bbed3348f19b2b0f84\") " pod="kube-system/kube-apiserver-172-236-110-174" Mar 13 00:57:20.223988 kubelet[2688]: I0313 00:57:20.223941 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf62571ca2cb48bbed3348f19b2b0f84-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-110-174\" (UID: \"cf62571ca2cb48bbed3348f19b2b0f84\") " pod="kube-system/kube-apiserver-172-236-110-174" Mar 13 00:57:20.223988 kubelet[2688]: I0313 00:57:20.223960 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-ca-certs\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:20.223988 kubelet[2688]: I0313 00:57:20.223974 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-kubeconfig\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:20.223988 kubelet[2688]: I0313 00:57:20.223989 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2af24e7aa761300f4f2c736ba8a436f4-kubeconfig\") pod \"kube-scheduler-172-236-110-174\" (UID: \"2af24e7aa761300f4f2c736ba8a436f4\") " pod="kube-system/kube-scheduler-172-236-110-174" Mar 13 00:57:20.224133 
kubelet[2688]: I0313 00:57:20.224004 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf62571ca2cb48bbed3348f19b2b0f84-ca-certs\") pod \"kube-apiserver-172-236-110-174\" (UID: \"cf62571ca2cb48bbed3348f19b2b0f84\") " pod="kube-system/kube-apiserver-172-236-110-174" Mar 13 00:57:20.224133 kubelet[2688]: I0313 00:57:20.224026 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-flexvolume-dir\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:20.224133 kubelet[2688]: I0313 00:57:20.224040 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-k8s-certs\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:20.224133 kubelet[2688]: I0313 00:57:20.224062 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01a0e4fa6996a6b36c900a1ffedd5c53-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-110-174\" (UID: \"01a0e4fa6996a6b36c900a1ffedd5c53\") " pod="kube-system/kube-controller-manager-172-236-110-174" Mar 13 00:57:20.448752 kubelet[2688]: E0313 00:57:20.448721 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:20.454127 kubelet[2688]: E0313 00:57:20.454084 2688 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:20.454309 kubelet[2688]: E0313 00:57:20.454260 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:20.999857 kubelet[2688]: I0313 00:57:20.998216 2688 apiserver.go:52] "Watching apiserver" Mar 13 00:57:21.022802 kubelet[2688]: I0313 00:57:21.022760 2688 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 00:57:21.042072 kubelet[2688]: I0313 00:57:21.041996 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-110-174" podStartSLOduration=1.04197677 podStartE2EDuration="1.04197677s" podCreationTimestamp="2026-03-13 00:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:21.04137198 +0000 UTC m=+1.120769131" watchObservedRunningTime="2026-03-13 00:57:21.04197677 +0000 UTC m=+1.121373931" Mar 13 00:57:21.042258 kubelet[2688]: I0313 00:57:21.042127 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-110-174" podStartSLOduration=1.0421211 podStartE2EDuration="1.0421211s" podCreationTimestamp="2026-03-13 00:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:21.03299863 +0000 UTC m=+1.112395781" watchObservedRunningTime="2026-03-13 00:57:21.0421211 +0000 UTC m=+1.121518251" Mar 13 00:57:21.056776 kubelet[2688]: I0313 00:57:21.056713 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-110-174" podStartSLOduration=3.05669953 
podStartE2EDuration="3.05669953s" podCreationTimestamp="2026-03-13 00:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:21.05659657 +0000 UTC m=+1.135993721" watchObservedRunningTime="2026-03-13 00:57:21.05669953 +0000 UTC m=+1.136096681" Mar 13 00:57:21.079762 kubelet[2688]: E0313 00:57:21.079727 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:21.081744 kubelet[2688]: E0313 00:57:21.081721 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:21.083294 kubelet[2688]: I0313 00:57:21.081835 2688 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-110-174" Mar 13 00:57:21.096177 kubelet[2688]: E0313 00:57:21.095985 2688 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-110-174\" already exists" pod="kube-system/kube-scheduler-172-236-110-174" Mar 13 00:57:21.096177 kubelet[2688]: E0313 00:57:21.096119 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:21.347332 sudo[1750]: pam_unix(sudo:session): session closed for user root Mar 13 00:57:21.368793 sshd[1749]: Connection closed by 68.220.241.50 port 49894 Mar 13 00:57:21.369372 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Mar 13 00:57:21.375842 systemd[1]: sshd@4-172.236.110.174:22-68.220.241.50:49894.service: Deactivated successfully. Mar 13 00:57:21.378829 systemd[1]: session-5.scope: Deactivated successfully. 
Mar 13 00:57:21.379099 systemd[1]: session-5.scope: Consumed 4.503s CPU time, 234.1M memory peak. Mar 13 00:57:21.382264 systemd-logind[1530]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:57:21.383902 systemd-logind[1530]: Removed session 5. Mar 13 00:57:22.081051 kubelet[2688]: E0313 00:57:22.080793 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:22.081051 kubelet[2688]: E0313 00:57:22.080793 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:23.083301 kubelet[2688]: E0313 00:57:23.083229 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:24.692211 kubelet[2688]: I0313 00:57:24.692161 2688 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:57:24.692846 kubelet[2688]: I0313 00:57:24.692760 2688 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:57:24.692878 containerd[1550]: time="2026-03-13T00:57:24.692570440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:57:25.642273 systemd[1]: Created slice kubepods-besteffort-pod2eca3cfc_842a_4ab5_b411_13338b75a353.slice - libcontainer container kubepods-besteffort-pod2eca3cfc_842a_4ab5_b411_13338b75a353.slice. Mar 13 00:57:25.660077 systemd[1]: Created slice kubepods-burstable-pod83d874ea_0376_4865_9eee_a9113b1fcce6.slice - libcontainer container kubepods-burstable-pod83d874ea_0376_4865_9eee_a9113b1fcce6.slice. 
Mar 13 00:57:25.662193 kubelet[2688]: I0313 00:57:25.661090 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eca3cfc-842a-4ab5-b411-13338b75a353-xtables-lock\") pod \"kube-proxy-qnfwk\" (UID: \"2eca3cfc-842a-4ab5-b411-13338b75a353\") " pod="kube-system/kube-proxy-qnfwk" Mar 13 00:57:25.662193 kubelet[2688]: I0313 00:57:25.661114 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eca3cfc-842a-4ab5-b411-13338b75a353-lib-modules\") pod \"kube-proxy-qnfwk\" (UID: \"2eca3cfc-842a-4ab5-b411-13338b75a353\") " pod="kube-system/kube-proxy-qnfwk" Mar 13 00:57:25.662193 kubelet[2688]: I0313 00:57:25.661147 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9b7v\" (UniqueName: \"kubernetes.io/projected/2eca3cfc-842a-4ab5-b411-13338b75a353-kube-api-access-w9b7v\") pod \"kube-proxy-qnfwk\" (UID: \"2eca3cfc-842a-4ab5-b411-13338b75a353\") " pod="kube-system/kube-proxy-qnfwk" Mar 13 00:57:25.662193 kubelet[2688]: I0313 00:57:25.661164 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/83d874ea-0376-4865-9eee-a9113b1fcce6-cni-plugin\") pod \"kube-flannel-ds-vv85n\" (UID: \"83d874ea-0376-4865-9eee-a9113b1fcce6\") " pod="kube-flannel/kube-flannel-ds-vv85n" Mar 13 00:57:25.662193 kubelet[2688]: I0313 00:57:25.661180 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83d874ea-0376-4865-9eee-a9113b1fcce6-xtables-lock\") pod \"kube-flannel-ds-vv85n\" (UID: \"83d874ea-0376-4865-9eee-a9113b1fcce6\") " pod="kube-flannel/kube-flannel-ds-vv85n" Mar 13 00:57:25.663490 kubelet[2688]: I0313 00:57:25.661199 2688 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/83d874ea-0376-4865-9eee-a9113b1fcce6-run\") pod \"kube-flannel-ds-vv85n\" (UID: \"83d874ea-0376-4865-9eee-a9113b1fcce6\") " pod="kube-flannel/kube-flannel-ds-vv85n" Mar 13 00:57:25.663490 kubelet[2688]: I0313 00:57:25.661211 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/83d874ea-0376-4865-9eee-a9113b1fcce6-cni\") pod \"kube-flannel-ds-vv85n\" (UID: \"83d874ea-0376-4865-9eee-a9113b1fcce6\") " pod="kube-flannel/kube-flannel-ds-vv85n" Mar 13 00:57:25.663490 kubelet[2688]: I0313 00:57:25.661223 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/83d874ea-0376-4865-9eee-a9113b1fcce6-flannel-cfg\") pod \"kube-flannel-ds-vv85n\" (UID: \"83d874ea-0376-4865-9eee-a9113b1fcce6\") " pod="kube-flannel/kube-flannel-ds-vv85n" Mar 13 00:57:25.663490 kubelet[2688]: I0313 00:57:25.661242 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jqlq\" (UniqueName: \"kubernetes.io/projected/83d874ea-0376-4865-9eee-a9113b1fcce6-kube-api-access-7jqlq\") pod \"kube-flannel-ds-vv85n\" (UID: \"83d874ea-0376-4865-9eee-a9113b1fcce6\") " pod="kube-flannel/kube-flannel-ds-vv85n" Mar 13 00:57:25.663490 kubelet[2688]: I0313 00:57:25.661256 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2eca3cfc-842a-4ab5-b411-13338b75a353-kube-proxy\") pod \"kube-proxy-qnfwk\" (UID: \"2eca3cfc-842a-4ab5-b411-13338b75a353\") " pod="kube-system/kube-proxy-qnfwk" Mar 13 00:57:25.957995 kubelet[2688]: E0313 00:57:25.957641 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:25.959149 containerd[1550]: time="2026-03-13T00:57:25.958811490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnfwk,Uid:2eca3cfc-842a-4ab5-b411-13338b75a353,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:25.971315 kubelet[2688]: E0313 00:57:25.967149 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:25.974711 containerd[1550]: time="2026-03-13T00:57:25.973497130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vv85n,Uid:83d874ea-0376-4865-9eee-a9113b1fcce6,Namespace:kube-flannel,Attempt:0,}" Mar 13 00:57:25.982046 containerd[1550]: time="2026-03-13T00:57:25.982018190Z" level=info msg="connecting to shim 99e5d0c1d050908aa9dd754d7ec4927da4c0e74535d1474362a6737749b39122" address="unix:///run/containerd/s/1c4ce2f8f5890bb0381070d8b5652a36efaffc4db2f9f9d268f837b87c638db0" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:26.003420 containerd[1550]: time="2026-03-13T00:57:26.003364960Z" level=info msg="connecting to shim 7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099" address="unix:///run/containerd/s/21c2ba81adc25b3a08fc04c1634f27ca329c6e48481a058a80036be29e358e5f" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:26.018988 systemd[1]: Started cri-containerd-99e5d0c1d050908aa9dd754d7ec4927da4c0e74535d1474362a6737749b39122.scope - libcontainer container 99e5d0c1d050908aa9dd754d7ec4927da4c0e74535d1474362a6737749b39122. Mar 13 00:57:26.043422 systemd[1]: Started cri-containerd-7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099.scope - libcontainer container 7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099. 
Mar 13 00:57:26.075851 containerd[1550]: time="2026-03-13T00:57:26.075805970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnfwk,Uid:2eca3cfc-842a-4ab5-b411-13338b75a353,Namespace:kube-system,Attempt:0,} returns sandbox id \"99e5d0c1d050908aa9dd754d7ec4927da4c0e74535d1474362a6737749b39122\"" Mar 13 00:57:26.078243 kubelet[2688]: E0313 00:57:26.078205 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:26.099771 containerd[1550]: time="2026-03-13T00:57:26.099740950Z" level=info msg="CreateContainer within sandbox \"99e5d0c1d050908aa9dd754d7ec4927da4c0e74535d1474362a6737749b39122\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:57:26.114518 containerd[1550]: time="2026-03-13T00:57:26.113107740Z" level=info msg="Container 8d03168f4e4c644b1b52000f54105e2435867d3e442ce5c9503a34c3e1d4440e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:26.114518 containerd[1550]: time="2026-03-13T00:57:26.113364430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vv85n,Uid:83d874ea-0376-4865-9eee-a9113b1fcce6,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099\"" Mar 13 00:57:26.116686 kubelet[2688]: E0313 00:57:26.116645 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:26.121489 containerd[1550]: time="2026-03-13T00:57:26.121468070Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Mar 13 00:57:26.124767 containerd[1550]: time="2026-03-13T00:57:26.124730940Z" level=info msg="CreateContainer within sandbox \"99e5d0c1d050908aa9dd754d7ec4927da4c0e74535d1474362a6737749b39122\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d03168f4e4c644b1b52000f54105e2435867d3e442ce5c9503a34c3e1d4440e\"" Mar 13 00:57:26.125637 containerd[1550]: time="2026-03-13T00:57:26.125619230Z" level=info msg="StartContainer for \"8d03168f4e4c644b1b52000f54105e2435867d3e442ce5c9503a34c3e1d4440e\"" Mar 13 00:57:26.128038 containerd[1550]: time="2026-03-13T00:57:26.128002810Z" level=info msg="connecting to shim 8d03168f4e4c644b1b52000f54105e2435867d3e442ce5c9503a34c3e1d4440e" address="unix:///run/containerd/s/1c4ce2f8f5890bb0381070d8b5652a36efaffc4db2f9f9d268f837b87c638db0" protocol=ttrpc version=3 Mar 13 00:57:26.150422 systemd[1]: Started cri-containerd-8d03168f4e4c644b1b52000f54105e2435867d3e442ce5c9503a34c3e1d4440e.scope - libcontainer container 8d03168f4e4c644b1b52000f54105e2435867d3e442ce5c9503a34c3e1d4440e. Mar 13 00:57:26.229520 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 13 00:57:26.259951 containerd[1550]: time="2026-03-13T00:57:26.259851320Z" level=info msg="StartContainer for \"8d03168f4e4c644b1b52000f54105e2435867d3e442ce5c9503a34c3e1d4440e\" returns successfully" Mar 13 00:57:26.893632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131705726.mount: Deactivated successfully. 
Mar 13 00:57:26.931485 containerd[1550]: time="2026-03-13T00:57:26.931414640Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:26.932352 containerd[1550]: time="2026-03-13T00:57:26.932325810Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Mar 13 00:57:26.933130 containerd[1550]: time="2026-03-13T00:57:26.932835900Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:26.935018 containerd[1550]: time="2026-03-13T00:57:26.934996460Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:26.936134 containerd[1550]: time="2026-03-13T00:57:26.936104580Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 814.50338ms" Mar 13 00:57:26.936171 containerd[1550]: time="2026-03-13T00:57:26.936135820Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Mar 13 00:57:26.940082 containerd[1550]: time="2026-03-13T00:57:26.940058640Z" level=info msg="CreateContainer within sandbox \"7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 13 00:57:26.945915 containerd[1550]: time="2026-03-13T00:57:26.945889130Z" level=info msg="Container 9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:26.951600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297460272.mount: Deactivated successfully. Mar 13 00:57:26.953775 containerd[1550]: time="2026-03-13T00:57:26.953751090Z" level=info msg="CreateContainer within sandbox \"7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2\"" Mar 13 00:57:26.954241 containerd[1550]: time="2026-03-13T00:57:26.954219820Z" level=info msg="StartContainer for \"9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2\"" Mar 13 00:57:26.956111 containerd[1550]: time="2026-03-13T00:57:26.954948170Z" level=info msg="connecting to shim 9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2" address="unix:///run/containerd/s/21c2ba81adc25b3a08fc04c1634f27ca329c6e48481a058a80036be29e358e5f" protocol=ttrpc version=3 Mar 13 00:57:26.979440 systemd[1]: Started cri-containerd-9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2.scope - libcontainer container 9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2. Mar 13 00:57:27.016001 containerd[1550]: time="2026-03-13T00:57:27.015886910Z" level=info msg="StartContainer for \"9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2\" returns successfully" Mar 13 00:57:27.018532 systemd[1]: cri-containerd-9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2.scope: Deactivated successfully. 
Mar 13 00:57:27.022596 containerd[1550]: time="2026-03-13T00:57:27.022241300Z" level=info msg="received container exit event container_id:\"9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2\" id:\"9889ae1026e707f5fe7940fae5b8d3aeb9470aab67e76c9a4eca98f80b21ccb2\" pid:3026 exited_at:{seconds:1773363447 nanos:21733750}" Mar 13 00:57:27.102339 kubelet[2688]: E0313 00:57:27.102308 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:27.108463 kubelet[2688]: E0313 00:57:27.108329 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:27.109952 containerd[1550]: time="2026-03-13T00:57:27.109712670Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Mar 13 00:57:27.125345 kubelet[2688]: I0313 00:57:27.125297 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qnfwk" podStartSLOduration=2.12526583 podStartE2EDuration="2.12526583s" podCreationTimestamp="2026-03-13 00:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:27.115981 +0000 UTC m=+7.195378151" watchObservedRunningTime="2026-03-13 00:57:27.12526583 +0000 UTC m=+7.204662981" Mar 13 00:57:27.586646 kubelet[2688]: E0313 00:57:27.586587 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:28.110440 kubelet[2688]: E0313 00:57:28.110408 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:28.111662 kubelet[2688]: E0313 00:57:28.111129 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:28.500557 containerd[1550]: time="2026-03-13T00:57:28.499558710Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:28.500557 containerd[1550]: time="2026-03-13T00:57:28.500524010Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Mar 13 00:57:28.501098 containerd[1550]: time="2026-03-13T00:57:28.501077010Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:28.503791 containerd[1550]: time="2026-03-13T00:57:28.503761900Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:57:28.505345 containerd[1550]: time="2026-03-13T00:57:28.505311600Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 1.39556484s" Mar 13 00:57:28.505407 containerd[1550]: time="2026-03-13T00:57:28.505345240Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Mar 13 00:57:28.510082 containerd[1550]: time="2026-03-13T00:57:28.509846110Z" 
level=info msg="CreateContainer within sandbox \"7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:57:28.519897 containerd[1550]: time="2026-03-13T00:57:28.519394670Z" level=info msg="Container 110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:28.522621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793989453.mount: Deactivated successfully. Mar 13 00:57:28.528246 containerd[1550]: time="2026-03-13T00:57:28.528198620Z" level=info msg="CreateContainer within sandbox \"7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86\"" Mar 13 00:57:28.529604 containerd[1550]: time="2026-03-13T00:57:28.529577670Z" level=info msg="StartContainer for \"110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86\"" Mar 13 00:57:28.531470 containerd[1550]: time="2026-03-13T00:57:28.531410470Z" level=info msg="connecting to shim 110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86" address="unix:///run/containerd/s/21c2ba81adc25b3a08fc04c1634f27ca329c6e48481a058a80036be29e358e5f" protocol=ttrpc version=3 Mar 13 00:57:28.560450 systemd[1]: Started cri-containerd-110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86.scope - libcontainer container 110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86. Mar 13 00:57:28.592746 systemd[1]: cri-containerd-110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86.scope: Deactivated successfully. 
Mar 13 00:57:28.594397 containerd[1550]: time="2026-03-13T00:57:28.594249240Z" level=info msg="received container exit event container_id:\"110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86\" id:\"110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86\" pid:3101 exited_at:{seconds:1773363448 nanos:593625350}" Mar 13 00:57:28.611253 containerd[1550]: time="2026-03-13T00:57:28.611220880Z" level=info msg="StartContainer for \"110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86\" returns successfully" Mar 13 00:57:28.629836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-110e09ec7425707629f5c268680d6ae50f7fc8e80008a3e3657110e9de944f86-rootfs.mount: Deactivated successfully. Mar 13 00:57:28.649062 kubelet[2688]: I0313 00:57:28.649036 2688 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 00:57:28.693586 systemd[1]: Created slice kubepods-burstable-pod2007d545_7497_4f8a_87a0_bd161c73bd1a.slice - libcontainer container kubepods-burstable-pod2007d545_7497_4f8a_87a0_bd161c73bd1a.slice. Mar 13 00:57:28.702588 systemd[1]: Created slice kubepods-burstable-pod6d071bf0_0e16_4f1a_8e8e_24dc3e39e371.slice - libcontainer container kubepods-burstable-pod6d071bf0_0e16_4f1a_8e8e_24dc3e39e371.slice. 
Mar 13 00:57:28.782389 kubelet[2688]: I0313 00:57:28.779835 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcbx8\" (UniqueName: \"kubernetes.io/projected/2007d545-7497-4f8a-87a0-bd161c73bd1a-kube-api-access-hcbx8\") pod \"coredns-66bc5c9577-fwnrz\" (UID: \"2007d545-7497-4f8a-87a0-bd161c73bd1a\") " pod="kube-system/coredns-66bc5c9577-fwnrz" Mar 13 00:57:28.782389 kubelet[2688]: I0313 00:57:28.779872 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvzk5\" (UniqueName: \"kubernetes.io/projected/6d071bf0-0e16-4f1a-8e8e-24dc3e39e371-kube-api-access-cvzk5\") pod \"coredns-66bc5c9577-8ldtr\" (UID: \"6d071bf0-0e16-4f1a-8e8e-24dc3e39e371\") " pod="kube-system/coredns-66bc5c9577-8ldtr" Mar 13 00:57:28.782389 kubelet[2688]: I0313 00:57:28.779911 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2007d545-7497-4f8a-87a0-bd161c73bd1a-config-volume\") pod \"coredns-66bc5c9577-fwnrz\" (UID: \"2007d545-7497-4f8a-87a0-bd161c73bd1a\") " pod="kube-system/coredns-66bc5c9577-fwnrz" Mar 13 00:57:28.782389 kubelet[2688]: I0313 00:57:28.779930 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d071bf0-0e16-4f1a-8e8e-24dc3e39e371-config-volume\") pod \"coredns-66bc5c9577-8ldtr\" (UID: \"6d071bf0-0e16-4f1a-8e8e-24dc3e39e371\") " pod="kube-system/coredns-66bc5c9577-8ldtr" Mar 13 00:57:29.003113 kubelet[2688]: E0313 00:57:29.003072 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:29.004044 containerd[1550]: time="2026-03-13T00:57:29.003995840Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-fwnrz,Uid:2007d545-7497-4f8a-87a0-bd161c73bd1a,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:29.008464 kubelet[2688]: E0313 00:57:29.008391 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:29.008911 containerd[1550]: time="2026-03-13T00:57:29.008870900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8ldtr,Uid:6d071bf0-0e16-4f1a-8e8e-24dc3e39e371,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:29.037421 containerd[1550]: time="2026-03-13T00:57:29.037170770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fwnrz,Uid:2007d545-7497-4f8a-87a0-bd161c73bd1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e42d054662eff5b27837ee90c48b2548ef68cac4f27a62a9ceac92e9046e7ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 13 00:57:29.037574 systemd[1]: run-netns-cni\x2d0db0ddfd\x2d0df9\x2d4b86\x2d4111\x2dad495fa1ceee.mount: Deactivated successfully. 
Mar 13 00:57:29.038495 kubelet[2688]: E0313 00:57:29.038449 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e42d054662eff5b27837ee90c48b2548ef68cac4f27a62a9ceac92e9046e7ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 13 00:57:29.038652 kubelet[2688]: E0313 00:57:29.038631 2688 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e42d054662eff5b27837ee90c48b2548ef68cac4f27a62a9ceac92e9046e7ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-fwnrz" Mar 13 00:57:29.038719 kubelet[2688]: E0313 00:57:29.038706 2688 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e42d054662eff5b27837ee90c48b2548ef68cac4f27a62a9ceac92e9046e7ad\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-fwnrz" Mar 13 00:57:29.038831 kubelet[2688]: E0313 00:57:29.038807 2688 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fwnrz_kube-system(2007d545-7497-4f8a-87a0-bd161c73bd1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fwnrz_kube-system(2007d545-7497-4f8a-87a0-bd161c73bd1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e42d054662eff5b27837ee90c48b2548ef68cac4f27a62a9ceac92e9046e7ad\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-fwnrz" 
podUID="2007d545-7497-4f8a-87a0-bd161c73bd1a" Mar 13 00:57:29.042026 containerd[1550]: time="2026-03-13T00:57:29.041982360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8ldtr,Uid:6d071bf0-0e16-4f1a-8e8e-24dc3e39e371,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b43423e73f7faf8ab780040394261dc1b09c089e7d587ae5da34d79842edc7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 13 00:57:29.042128 kubelet[2688]: E0313 00:57:29.042107 2688 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b43423e73f7faf8ab780040394261dc1b09c089e7d587ae5da34d79842edc7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 13 00:57:29.042183 kubelet[2688]: E0313 00:57:29.042134 2688 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b43423e73f7faf8ab780040394261dc1b09c089e7d587ae5da34d79842edc7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-8ldtr" Mar 13 00:57:29.042183 kubelet[2688]: E0313 00:57:29.042147 2688 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25b43423e73f7faf8ab780040394261dc1b09c089e7d587ae5da34d79842edc7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-8ldtr" Mar 13 00:57:29.042232 kubelet[2688]: E0313 00:57:29.042181 2688 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-66bc5c9577-8ldtr_kube-system(6d071bf0-0e16-4f1a-8e8e-24dc3e39e371)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8ldtr_kube-system(6d071bf0-0e16-4f1a-8e8e-24dc3e39e371)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25b43423e73f7faf8ab780040394261dc1b09c089e7d587ae5da34d79842edc7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-8ldtr" podUID="6d071bf0-0e16-4f1a-8e8e-24dc3e39e371" Mar 13 00:57:29.114977 kubelet[2688]: E0313 00:57:29.114661 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:29.120537 containerd[1550]: time="2026-03-13T00:57:29.120490400Z" level=info msg="CreateContainer within sandbox \"7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 13 00:57:29.127529 containerd[1550]: time="2026-03-13T00:57:29.127496880Z" level=info msg="Container ae257cb0bc7fad596b8f2549db26c2287ece7ff6e160b9fd488676b3ee8c271f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:29.132541 containerd[1550]: time="2026-03-13T00:57:29.132509980Z" level=info msg="CreateContainer within sandbox \"7d0cb6a8178b31866652680647b7cdf6e841991389395cb951609ad68fd39099\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"ae257cb0bc7fad596b8f2549db26c2287ece7ff6e160b9fd488676b3ee8c271f\"" Mar 13 00:57:29.135312 containerd[1550]: time="2026-03-13T00:57:29.133367310Z" level=info msg="StartContainer for \"ae257cb0bc7fad596b8f2549db26c2287ece7ff6e160b9fd488676b3ee8c271f\"" Mar 13 00:57:29.135583 containerd[1550]: time="2026-03-13T00:57:29.135557550Z" level=info msg="connecting to shim 
ae257cb0bc7fad596b8f2549db26c2287ece7ff6e160b9fd488676b3ee8c271f" address="unix:///run/containerd/s/21c2ba81adc25b3a08fc04c1634f27ca329c6e48481a058a80036be29e358e5f" protocol=ttrpc version=3 Mar 13 00:57:29.176715 systemd[1]: Started cri-containerd-ae257cb0bc7fad596b8f2549db26c2287ece7ff6e160b9fd488676b3ee8c271f.scope - libcontainer container ae257cb0bc7fad596b8f2549db26c2287ece7ff6e160b9fd488676b3ee8c271f. Mar 13 00:57:29.229295 containerd[1550]: time="2026-03-13T00:57:29.229239980Z" level=info msg="StartContainer for \"ae257cb0bc7fad596b8f2549db26c2287ece7ff6e160b9fd488676b3ee8c271f\" returns successfully" Mar 13 00:57:29.418205 kubelet[2688]: E0313 00:57:29.417956 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:29.894325 systemd[1]: run-netns-cni\x2d638c0370\x2da807\x2d4cbb\x2d1aa2\x2d1b236cabd05f.mount: Deactivated successfully. Mar 13 00:57:30.117731 kubelet[2688]: E0313 00:57:30.117691 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:30.120399 kubelet[2688]: E0313 00:57:30.118188 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:30.135616 kubelet[2688]: I0313 00:57:30.135567 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-vv85n" podStartSLOduration=2.749630385 podStartE2EDuration="5.135554015s" podCreationTimestamp="2026-03-13 00:57:25 +0000 UTC" firstStartedPulling="2026-03-13 00:57:26.12093388 +0000 UTC m=+6.200331031" lastFinishedPulling="2026-03-13 00:57:28.50685751 +0000 UTC m=+8.586254661" observedRunningTime="2026-03-13 00:57:30.135349788 
+0000 UTC m=+10.214746949" watchObservedRunningTime="2026-03-13 00:57:30.135554015 +0000 UTC m=+10.214951166" Mar 13 00:57:30.290156 systemd-networkd[1442]: flannel.1: Link UP Mar 13 00:57:30.290166 systemd-networkd[1442]: flannel.1: Gained carrier Mar 13 00:57:31.692468 systemd-networkd[1442]: flannel.1: Gained IPv6LL Mar 13 00:57:32.048366 kubelet[2688]: E0313 00:57:32.047893 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:32.121793 kubelet[2688]: E0313 00:57:32.121704 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:40.745499 update_engine[1531]: I20260313 00:57:40.745141 1531 update_attempter.cc:509] Updating boot flags... Mar 13 00:57:42.041841 kubelet[2688]: E0313 00:57:42.041792 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:42.043916 containerd[1550]: time="2026-03-13T00:57:42.043177320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8ldtr,Uid:6d071bf0-0e16-4f1a-8e8e-24dc3e39e371,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:42.062603 systemd-networkd[1442]: cni0: Link UP Mar 13 00:57:42.074857 systemd-networkd[1442]: vethebfaa740: Link UP Mar 13 00:57:42.080600 kernel: cni0: port 1(vethebfaa740) entered blocking state Mar 13 00:57:42.080662 kernel: cni0: port 1(vethebfaa740) entered disabled state Mar 13 00:57:42.083990 kernel: vethebfaa740: entered allmulticast mode Mar 13 00:57:42.084033 kernel: vethebfaa740: entered promiscuous mode Mar 13 00:57:42.094711 kernel: cni0: port 1(vethebfaa740) entered blocking state Mar 13 00:57:42.094794 kernel: cni0: port 
1(vethebfaa740) entered forwarding state Mar 13 00:57:42.094209 systemd-networkd[1442]: vethebfaa740: Gained carrier Mar 13 00:57:42.095467 systemd-networkd[1442]: cni0: Gained carrier Mar 13 00:57:42.100266 containerd[1550]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000082950), "name":"cbr0", "type":"bridge"} Mar 13 00:57:42.100266 containerd[1550]: delegateAdd: netconf sent to delegate plugin: Mar 13 00:57:42.127524 containerd[1550]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-13T00:57:42.127447597Z" level=info msg="connecting to shim 3a4e2c619f978ba8e939f0b01b1677ccf8c6c3ef91653c69f6f0c063dfda3164" address="unix:///run/containerd/s/fbebd196654a5797ba04e20edff8ff190b05b058660af052d53ed9d67d56d9da" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:42.169431 systemd[1]: Started cri-containerd-3a4e2c619f978ba8e939f0b01b1677ccf8c6c3ef91653c69f6f0c063dfda3164.scope - libcontainer container 3a4e2c619f978ba8e939f0b01b1677ccf8c6c3ef91653c69f6f0c063dfda3164. 
Mar 13 00:57:42.220981 containerd[1550]: time="2026-03-13T00:57:42.220933756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8ldtr,Uid:6d071bf0-0e16-4f1a-8e8e-24dc3e39e371,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a4e2c619f978ba8e939f0b01b1677ccf8c6c3ef91653c69f6f0c063dfda3164\"" Mar 13 00:57:42.223053 kubelet[2688]: E0313 00:57:42.222651 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:42.226427 containerd[1550]: time="2026-03-13T00:57:42.226350600Z" level=info msg="CreateContainer within sandbox \"3a4e2c619f978ba8e939f0b01b1677ccf8c6c3ef91653c69f6f0c063dfda3164\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:57:42.242152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754705141.mount: Deactivated successfully. Mar 13 00:57:42.246313 containerd[1550]: time="2026-03-13T00:57:42.245438489Z" level=info msg="Container 98ed055b4c2fa0ce3c3c8a745727552411cf966c93d687540152f927a028f8ab: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:42.250016 containerd[1550]: time="2026-03-13T00:57:42.249973730Z" level=info msg="CreateContainer within sandbox \"3a4e2c619f978ba8e939f0b01b1677ccf8c6c3ef91653c69f6f0c063dfda3164\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98ed055b4c2fa0ce3c3c8a745727552411cf966c93d687540152f927a028f8ab\"" Mar 13 00:57:42.251183 containerd[1550]: time="2026-03-13T00:57:42.251146210Z" level=info msg="StartContainer for \"98ed055b4c2fa0ce3c3c8a745727552411cf966c93d687540152f927a028f8ab\"" Mar 13 00:57:42.252560 containerd[1550]: time="2026-03-13T00:57:42.252456349Z" level=info msg="connecting to shim 98ed055b4c2fa0ce3c3c8a745727552411cf966c93d687540152f927a028f8ab" address="unix:///run/containerd/s/fbebd196654a5797ba04e20edff8ff190b05b058660af052d53ed9d67d56d9da" protocol=ttrpc version=3 
Mar 13 00:57:42.277427 systemd[1]: Started cri-containerd-98ed055b4c2fa0ce3c3c8a745727552411cf966c93d687540152f927a028f8ab.scope - libcontainer container 98ed055b4c2fa0ce3c3c8a745727552411cf966c93d687540152f927a028f8ab. Mar 13 00:57:42.314345 containerd[1550]: time="2026-03-13T00:57:42.314183707Z" level=info msg="StartContainer for \"98ed055b4c2fa0ce3c3c8a745727552411cf966c93d687540152f927a028f8ab\" returns successfully" Mar 13 00:57:43.147026 kubelet[2688]: E0313 00:57:43.146748 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:43.159484 kubelet[2688]: I0313 00:57:43.159438 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8ldtr" podStartSLOduration=18.159422438 podStartE2EDuration="18.159422438s" podCreationTimestamp="2026-03-13 00:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:43.159315699 +0000 UTC m=+23.238712850" watchObservedRunningTime="2026-03-13 00:57:43.159422438 +0000 UTC m=+23.238819589" Mar 13 00:57:43.468589 systemd-networkd[1442]: vethebfaa740: Gained IPv6LL Mar 13 00:57:43.788750 systemd-networkd[1442]: cni0: Gained IPv6LL Mar 13 00:57:44.041541 kubelet[2688]: E0313 00:57:44.041432 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:44.042429 containerd[1550]: time="2026-03-13T00:57:44.042393404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fwnrz,Uid:2007d545-7497-4f8a-87a0-bd161c73bd1a,Namespace:kube-system,Attempt:0,}" Mar 13 00:57:44.063250 systemd-networkd[1442]: veth49501221: Link UP Mar 13 00:57:44.068405 kernel: cni0: port 2(veth49501221) 
entered blocking state Mar 13 00:57:44.068483 kernel: cni0: port 2(veth49501221) entered disabled state Mar 13 00:57:44.069994 kernel: veth49501221: entered allmulticast mode Mar 13 00:57:44.071764 kernel: veth49501221: entered promiscuous mode Mar 13 00:57:44.080220 kernel: cni0: port 2(veth49501221) entered blocking state Mar 13 00:57:44.080252 kernel: cni0: port 2(veth49501221) entered forwarding state Mar 13 00:57:44.081049 systemd-networkd[1442]: veth49501221: Gained carrier Mar 13 00:57:44.085076 containerd[1550]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00008a950), "name":"cbr0", "type":"bridge"} Mar 13 00:57:44.085076 containerd[1550]: delegateAdd: netconf sent to delegate plugin: Mar 13 00:57:44.108918 containerd[1550]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-03-13T00:57:44.108872630Z" level=info msg="connecting to shim fb905c50a24bdede6876135865bee3f5e6e78c37acf9bbdc1121349e8e53f49a" address="unix:///run/containerd/s/2359117e06f86a19ef2feddfb562195260d0aa4b72483bbfc33c545ffc545bc4" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:57:44.135408 systemd[1]: Started cri-containerd-fb905c50a24bdede6876135865bee3f5e6e78c37acf9bbdc1121349e8e53f49a.scope - libcontainer container fb905c50a24bdede6876135865bee3f5e6e78c37acf9bbdc1121349e8e53f49a. 
Mar 13 00:57:44.148690 kubelet[2688]: E0313 00:57:44.148661 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:44.191789 containerd[1550]: time="2026-03-13T00:57:44.191751093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fwnrz,Uid:2007d545-7497-4f8a-87a0-bd161c73bd1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb905c50a24bdede6876135865bee3f5e6e78c37acf9bbdc1121349e8e53f49a\"" Mar 13 00:57:44.193244 kubelet[2688]: E0313 00:57:44.193223 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:44.198000 containerd[1550]: time="2026-03-13T00:57:44.197969387Z" level=info msg="CreateContainer within sandbox \"fb905c50a24bdede6876135865bee3f5e6e78c37acf9bbdc1121349e8e53f49a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:57:44.210986 containerd[1550]: time="2026-03-13T00:57:44.210961050Z" level=info msg="Container 887611fc698f1b7f00616238ae07488178b9973810f017c4cc6c05961603b087: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:57:44.222124 containerd[1550]: time="2026-03-13T00:57:44.221803420Z" level=info msg="CreateContainer within sandbox \"fb905c50a24bdede6876135865bee3f5e6e78c37acf9bbdc1121349e8e53f49a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"887611fc698f1b7f00616238ae07488178b9973810f017c4cc6c05961603b087\"" Mar 13 00:57:44.223362 containerd[1550]: time="2026-03-13T00:57:44.223341178Z" level=info msg="StartContainer for \"887611fc698f1b7f00616238ae07488178b9973810f017c4cc6c05961603b087\"" Mar 13 00:57:44.224417 containerd[1550]: time="2026-03-13T00:57:44.224359351Z" level=info msg="connecting to shim 887611fc698f1b7f00616238ae07488178b9973810f017c4cc6c05961603b087" 
address="unix:///run/containerd/s/2359117e06f86a19ef2feddfb562195260d0aa4b72483bbfc33c545ffc545bc4" protocol=ttrpc version=3 Mar 13 00:57:44.252423 systemd[1]: Started cri-containerd-887611fc698f1b7f00616238ae07488178b9973810f017c4cc6c05961603b087.scope - libcontainer container 887611fc698f1b7f00616238ae07488178b9973810f017c4cc6c05961603b087. Mar 13 00:57:44.282819 containerd[1550]: time="2026-03-13T00:57:44.282773306Z" level=info msg="StartContainer for \"887611fc698f1b7f00616238ae07488178b9973810f017c4cc6c05961603b087\" returns successfully" Mar 13 00:57:45.151825 kubelet[2688]: E0313 00:57:45.151794 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:45.152479 kubelet[2688]: E0313 00:57:45.152461 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:45.165485 kubelet[2688]: I0313 00:57:45.165272 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fwnrz" podStartSLOduration=20.16525705 podStartE2EDuration="20.16525705s" podCreationTimestamp="2026-03-13 00:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:57:45.164405876 +0000 UTC m=+25.243803027" watchObservedRunningTime="2026-03-13 00:57:45.16525705 +0000 UTC m=+25.244654201" Mar 13 00:57:46.028556 systemd-networkd[1442]: veth49501221: Gained IPv6LL Mar 13 00:57:46.152704 kubelet[2688]: E0313 00:57:46.152678 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:57:47.154608 kubelet[2688]: E0313 00:57:47.154579 
2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:58:33.039922 kubelet[2688]: E0313 00:58:33.039807 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:58:35.039658 kubelet[2688]: E0313 00:58:35.039624 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:58:42.040255 kubelet[2688]: E0313 00:58:42.039864 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:58:47.610167 systemd[1]: Started sshd@5-172.236.110.174:22-68.220.241.50:48386.service - OpenSSH per-connection server daemon (68.220.241.50:48386). Mar 13 00:58:47.758818 sshd[3835]: Accepted publickey for core from 68.220.241.50 port 48386 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:58:47.760424 sshd-session[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:47.765446 systemd-logind[1530]: New session 6 of user core. Mar 13 00:58:47.771424 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:58:47.886058 sshd[3838]: Connection closed by 68.220.241.50 port 48386 Mar 13 00:58:47.887486 sshd-session[3835]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:47.891674 systemd[1]: sshd@5-172.236.110.174:22-68.220.241.50:48386.service: Deactivated successfully. Mar 13 00:58:47.894073 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:58:47.894855 systemd-logind[1530]: Session 6 logged out. 
Waiting for processes to exit. Mar 13 00:58:47.896471 systemd-logind[1530]: Removed session 6. Mar 13 00:58:49.040115 kubelet[2688]: E0313 00:58:49.040081 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:58:52.040610 kubelet[2688]: E0313 00:58:52.040248 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:58:52.925619 systemd[1]: Started sshd@6-172.236.110.174:22-68.220.241.50:40372.service - OpenSSH per-connection server daemon (68.220.241.50:40372). Mar 13 00:58:53.073083 sshd[3871]: Accepted publickey for core from 68.220.241.50 port 40372 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:58:53.074637 sshd-session[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:53.079329 systemd-logind[1530]: New session 7 of user core. Mar 13 00:58:53.086405 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:58:53.203124 sshd[3874]: Connection closed by 68.220.241.50 port 40372 Mar 13 00:58:53.204525 sshd-session[3871]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:53.210252 systemd-logind[1530]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:58:53.210881 systemd[1]: sshd@6-172.236.110.174:22-68.220.241.50:40372.service: Deactivated successfully. Mar 13 00:58:53.213512 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:58:53.216469 systemd-logind[1530]: Removed session 7. 
Mar 13 00:58:55.040256 kubelet[2688]: E0313 00:58:55.040194 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:58:58.237837 systemd[1]: Started sshd@7-172.236.110.174:22-68.220.241.50:40374.service - OpenSSH per-connection server daemon (68.220.241.50:40374). Mar 13 00:58:58.387446 sshd[3909]: Accepted publickey for core from 68.220.241.50 port 40374 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:58:58.389660 sshd-session[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:58.397376 systemd-logind[1530]: New session 8 of user core. Mar 13 00:58:58.404449 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 13 00:58:58.528966 sshd[3912]: Connection closed by 68.220.241.50 port 40374 Mar 13 00:58:58.530537 sshd-session[3909]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:58.535202 systemd[1]: sshd@7-172.236.110.174:22-68.220.241.50:40374.service: Deactivated successfully. Mar 13 00:58:58.538883 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 00:58:58.540709 systemd-logind[1530]: Session 8 logged out. Waiting for processes to exit. Mar 13 00:58:58.542992 systemd-logind[1530]: Removed session 8. Mar 13 00:58:58.562019 systemd[1]: Started sshd@8-172.236.110.174:22-68.220.241.50:40378.service - OpenSSH per-connection server daemon (68.220.241.50:40378). Mar 13 00:58:58.714338 sshd[3925]: Accepted publickey for core from 68.220.241.50 port 40378 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:58:58.715108 sshd-session[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:58.721539 systemd-logind[1530]: New session 9 of user core. Mar 13 00:58:58.724686 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 13 00:58:58.883147 sshd[3928]: Connection closed by 68.220.241.50 port 40378 Mar 13 00:58:58.886500 sshd-session[3925]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:58.892261 systemd-logind[1530]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:58:58.893773 systemd[1]: sshd@8-172.236.110.174:22-68.220.241.50:40378.service: Deactivated successfully. Mar 13 00:58:58.897126 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:58:58.901714 systemd-logind[1530]: Removed session 9. Mar 13 00:58:58.921108 systemd[1]: Started sshd@9-172.236.110.174:22-68.220.241.50:40394.service - OpenSSH per-connection server daemon (68.220.241.50:40394). Mar 13 00:58:59.081365 sshd[3938]: Accepted publickey for core from 68.220.241.50 port 40394 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:58:59.083601 sshd-session[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:58:59.089786 systemd-logind[1530]: New session 10 of user core. Mar 13 00:58:59.094477 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 00:58:59.223392 sshd[3941]: Connection closed by 68.220.241.50 port 40394 Mar 13 00:58:59.224751 sshd-session[3938]: pam_unix(sshd:session): session closed for user core Mar 13 00:58:59.231999 systemd-logind[1530]: Session 10 logged out. Waiting for processes to exit. Mar 13 00:58:59.232490 systemd[1]: sshd@9-172.236.110.174:22-68.220.241.50:40394.service: Deactivated successfully. Mar 13 00:58:59.235038 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 00:58:59.237917 systemd-logind[1530]: Removed session 10. Mar 13 00:59:04.039453 systemd[1]: Started sshd@10-172.236.110.174:22-34.226.195.14:51292.service - OpenSSH per-connection server daemon (34.226.195.14:51292). Mar 13 00:59:04.263800 systemd[1]: Started sshd@11-172.236.110.174:22-68.220.241.50:42598.service - OpenSSH per-connection server daemon (68.220.241.50:42598). 
Mar 13 00:59:04.428333 sshd[3976]: Accepted publickey for core from 68.220.241.50 port 42598 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:59:04.430301 sshd-session[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:04.436485 systemd-logind[1530]: New session 11 of user core. Mar 13 00:59:04.442424 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 00:59:04.561893 sshd[3979]: Connection closed by 68.220.241.50 port 42598 Mar 13 00:59:04.563492 sshd-session[3976]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:04.568966 systemd-logind[1530]: Session 11 logged out. Waiting for processes to exit. Mar 13 00:59:04.569187 systemd[1]: sshd@11-172.236.110.174:22-68.220.241.50:42598.service: Deactivated successfully. Mar 13 00:59:04.571864 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 00:59:04.573556 systemd-logind[1530]: Removed session 11. Mar 13 00:59:04.588553 systemd[1]: Started sshd@12-172.236.110.174:22-68.220.241.50:42614.service - OpenSSH per-connection server daemon (68.220.241.50:42614). Mar 13 00:59:04.732772 sshd[3991]: Accepted publickey for core from 68.220.241.50 port 42614 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:59:04.734182 sshd-session[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:04.740042 systemd-logind[1530]: New session 12 of user core. Mar 13 00:59:04.746431 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 00:59:04.871868 sshd[3994]: Connection closed by 68.220.241.50 port 42614 Mar 13 00:59:04.872533 sshd-session[3991]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:04.877323 systemd[1]: sshd@12-172.236.110.174:22-68.220.241.50:42614.service: Deactivated successfully. Mar 13 00:59:04.879518 systemd[1]: session-12.scope: Deactivated successfully. 
Mar 13 00:59:04.880426 systemd-logind[1530]: Session 12 logged out. Waiting for processes to exit. Mar 13 00:59:04.882133 systemd-logind[1530]: Removed session 12. Mar 13 00:59:04.899457 systemd[1]: Started sshd@13-172.236.110.174:22-68.220.241.50:42620.service - OpenSSH per-connection server daemon (68.220.241.50:42620). Mar 13 00:59:05.040650 sshd[4004]: Accepted publickey for core from 68.220.241.50 port 42620 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:59:05.042223 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:05.047604 systemd-logind[1530]: New session 13 of user core. Mar 13 00:59:05.052959 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 00:59:05.585027 sshd[4007]: Connection closed by 68.220.241.50 port 42620 Mar 13 00:59:05.585680 sshd-session[4004]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:05.593817 systemd[1]: sshd@13-172.236.110.174:22-68.220.241.50:42620.service: Deactivated successfully. Mar 13 00:59:05.600158 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 00:59:05.602156 systemd-logind[1530]: Session 13 logged out. Waiting for processes to exit. Mar 13 00:59:05.613834 systemd-logind[1530]: Removed session 13. Mar 13 00:59:05.614691 systemd[1]: Started sshd@14-172.236.110.174:22-68.220.241.50:42636.service - OpenSSH per-connection server daemon (68.220.241.50:42636). Mar 13 00:59:05.760089 sshd[4029]: Accepted publickey for core from 68.220.241.50 port 42636 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:59:05.762031 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:05.768270 systemd-logind[1530]: New session 14 of user core. Mar 13 00:59:05.776542 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 13 00:59:05.999780 sshd[4034]: Connection closed by 68.220.241.50 port 42636 Mar 13 00:59:06.000736 sshd-session[4029]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:06.006424 systemd[1]: sshd@14-172.236.110.174:22-68.220.241.50:42636.service: Deactivated successfully. Mar 13 00:59:06.009197 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 00:59:06.010758 systemd-logind[1530]: Session 14 logged out. Waiting for processes to exit. Mar 13 00:59:06.012475 systemd-logind[1530]: Removed session 14. Mar 13 00:59:06.030138 systemd[1]: Started sshd@15-172.236.110.174:22-68.220.241.50:42640.service - OpenSSH per-connection server daemon (68.220.241.50:42640). Mar 13 00:59:06.179864 sshd[4055]: Accepted publickey for core from 68.220.241.50 port 42640 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:59:06.181633 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:06.186857 systemd-logind[1530]: New session 15 of user core. Mar 13 00:59:06.191424 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 00:59:06.303821 sshd[4058]: Connection closed by 68.220.241.50 port 42640 Mar 13 00:59:06.304594 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:06.310783 systemd[1]: sshd@15-172.236.110.174:22-68.220.241.50:42640.service: Deactivated successfully. Mar 13 00:59:06.313274 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 00:59:06.314613 systemd-logind[1530]: Session 15 logged out. Waiting for processes to exit. Mar 13 00:59:06.317203 systemd-logind[1530]: Removed session 15. Mar 13 00:59:07.590055 sshd[3973]: Connection closed by 34.226.195.14 port 51292 [preauth] Mar 13 00:59:07.592516 systemd[1]: sshd@10-172.236.110.174:22-34.226.195.14:51292.service: Deactivated successfully. 
Mar 13 00:59:11.040913 kubelet[2688]: E0313 00:59:11.040379 2688 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Mar 13 00:59:11.340486 systemd[1]: Started sshd@16-172.236.110.174:22-68.220.241.50:42650.service - OpenSSH per-connection server daemon (68.220.241.50:42650). Mar 13 00:59:11.485747 sshd[4096]: Accepted publickey for core from 68.220.241.50 port 42650 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:59:11.487612 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:11.493398 systemd-logind[1530]: New session 16 of user core. Mar 13 00:59:11.499448 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 00:59:11.614213 sshd[4099]: Connection closed by 68.220.241.50 port 42650 Mar 13 00:59:11.615571 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:11.621521 systemd[1]: sshd@16-172.236.110.174:22-68.220.241.50:42650.service: Deactivated successfully. Mar 13 00:59:11.624507 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 00:59:11.625784 systemd-logind[1530]: Session 16 logged out. Waiting for processes to exit. Mar 13 00:59:11.628145 systemd-logind[1530]: Removed session 16. Mar 13 00:59:16.651335 systemd[1]: Started sshd@17-172.236.110.174:22-68.220.241.50:41608.service - OpenSSH per-connection server daemon (68.220.241.50:41608). Mar 13 00:59:16.807974 sshd[4131]: Accepted publickey for core from 68.220.241.50 port 41608 ssh2: RSA SHA256:jThJ6o3Oo9A3ZW+r2FILu+HRnrX8w4wWWQFqEKF1D2U Mar 13 00:59:16.809319 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:59:16.814842 systemd-logind[1530]: New session 17 of user core. Mar 13 00:59:16.822415 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 13 00:59:16.938621 sshd[4134]: Connection closed by 68.220.241.50 port 41608 Mar 13 00:59:16.940334 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Mar 13 00:59:16.945317 systemd-logind[1530]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:59:16.946586 systemd[1]: sshd@17-172.236.110.174:22-68.220.241.50:41608.service: Deactivated successfully. Mar 13 00:59:16.949273 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:59:16.951063 systemd-logind[1530]: Removed session 17.