Apr 24 00:16:03.947411 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 23 22:08:58 -00 2026
Apr 24 00:16:03.947446 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:16:03.947457 kernel: BIOS-provided physical RAM map:
Apr 24 00:16:03.947463 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 24 00:16:03.947469 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 24 00:16:03.947475 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 24 00:16:03.947485 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 24 00:16:03.947491 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 24 00:16:03.947497 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 24 00:16:03.947503 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 24 00:16:03.947509 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 00:16:03.947515 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 24 00:16:03.947521 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 24 00:16:03.947527 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 24 00:16:03.947536 kernel: NX (Execute Disable) protection: active
Apr 24 00:16:03.947543 kernel: APIC: Static calls initialized
Apr 24 00:16:03.947549 kernel: SMBIOS 2.8 present.
Apr 24 00:16:03.947556 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 24 00:16:03.947562 kernel: DMI: Memory slots populated: 1/1
Apr 24 00:16:03.947568 kernel: Hypervisor detected: KVM
Apr 24 00:16:03.947576 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 24 00:16:03.947583 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 00:16:03.947589 kernel: kvm-clock: using sched offset of 7441296702 cycles
Apr 24 00:16:03.947596 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 00:16:03.947603 kernel: tsc: Detected 1999.998 MHz processor
Apr 24 00:16:03.947609 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 00:16:03.947616 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 00:16:03.947623 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 24 00:16:03.947630 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 24 00:16:03.947638 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 00:16:03.947645 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 24 00:16:03.947651 kernel: Using GB pages for direct mapping
Apr 24 00:16:03.947657 kernel: ACPI: Early table checksum verification disabled
Apr 24 00:16:03.947664 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 24 00:16:03.947670 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:16:03.947677 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:16:03.947683 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:16:03.949729 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 24 00:16:03.949738 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:16:03.949750 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:16:03.949760 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:16:03.949767 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:16:03.949774 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 24 00:16:03.949781 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 24 00:16:03.949790 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 24 00:16:03.949797 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 24 00:16:03.949803 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 24 00:16:03.949810 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 24 00:16:03.949817 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 24 00:16:03.949824 kernel: No NUMA configuration found
Apr 24 00:16:03.949830 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 24 00:16:03.949837 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Apr 24 00:16:03.949844 kernel: Zone ranges:
Apr 24 00:16:03.949854 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 00:16:03.949860 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 24 00:16:03.949867 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 24 00:16:03.949874 kernel: Device empty
Apr 24 00:16:03.949881 kernel: Movable zone start for each node
Apr 24 00:16:03.949887 kernel: Early memory node ranges
Apr 24 00:16:03.949894 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 24 00:16:03.949901 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 24 00:16:03.949908 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 24 00:16:03.949917 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 24 00:16:03.949924 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 00:16:03.949930 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 24 00:16:03.949937 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 24 00:16:03.949944 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 00:16:03.949951 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 00:16:03.949957 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 00:16:03.949964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 00:16:03.949971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 00:16:03.949980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 00:16:03.949986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 00:16:03.949993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 00:16:03.950000 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 00:16:03.950007 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 00:16:03.950013 kernel: TSC deadline timer available
Apr 24 00:16:03.950020 kernel: CPU topo: Max. logical packages: 1
Apr 24 00:16:03.950027 kernel: CPU topo: Max. logical dies: 1
Apr 24 00:16:03.950033 kernel: CPU topo: Max. dies per package: 1
Apr 24 00:16:03.950040 kernel: CPU topo: Max. threads per core: 1
Apr 24 00:16:03.950049 kernel: CPU topo: Num. cores per package: 2
Apr 24 00:16:03.950055 kernel: CPU topo: Num. threads per package: 2
Apr 24 00:16:03.950062 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 24 00:16:03.950068 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 00:16:03.950075 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 00:16:03.950082 kernel: kvm-guest: setup PV sched yield
Apr 24 00:16:03.950089 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 24 00:16:03.950095 kernel: Booting paravirtualized kernel on KVM
Apr 24 00:16:03.950102 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 00:16:03.950111 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 24 00:16:03.950118 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 24 00:16:03.950125 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 24 00:16:03.950132 kernel: pcpu-alloc: [0] 0 1
Apr 24 00:16:03.950138 kernel: kvm-guest: PV spinlocks enabled
Apr 24 00:16:03.950145 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 00:16:03.950153 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:16:03.950165 kernel: random: crng init done
Apr 24 00:16:03.950179 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 00:16:03.950191 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 00:16:03.950202 kernel: Fallback order for Node 0: 0
Apr 24 00:16:03.950218 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Apr 24 00:16:03.950259 kernel: Policy zone: Normal
Apr 24 00:16:03.950281 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 00:16:03.950293 kernel: software IO TLB: area num 2.
Apr 24 00:16:03.950304 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 00:16:03.950316 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 24 00:16:03.950328 kernel: ftrace: allocated 157 pages with 5 groups
Apr 24 00:16:03.950335 kernel: Dynamic Preempt: voluntary
Apr 24 00:16:03.950342 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 00:16:03.950349 kernel: rcu: RCU event tracing is enabled.
Apr 24 00:16:03.950356 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 00:16:03.950363 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 00:16:03.950371 kernel: Rude variant of Tasks RCU enabled.
Apr 24 00:16:03.950377 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 00:16:03.950384 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 00:16:03.950394 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 00:16:03.950401 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:16:03.950414 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:16:03.950424 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:16:03.950431 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 24 00:16:03.950438 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 00:16:03.950446 kernel: Console: colour VGA+ 80x25
Apr 24 00:16:03.950453 kernel: printk: legacy console [tty0] enabled
Apr 24 00:16:03.950473 kernel: printk: legacy console [ttyS0] enabled
Apr 24 00:16:03.950490 kernel: ACPI: Core revision 20240827
Apr 24 00:16:03.950500 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 00:16:03.950507 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 00:16:03.950514 kernel: x2apic enabled
Apr 24 00:16:03.950521 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 00:16:03.950528 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 00:16:03.950536 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 00:16:03.950543 kernel: kvm-guest: setup PV IPIs
Apr 24 00:16:03.950552 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 00:16:03.950559 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Apr 24 00:16:03.950566 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Apr 24 00:16:03.950573 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 00:16:03.950581 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 24 00:16:03.950588 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 24 00:16:03.950595 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 00:16:03.950602 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 00:16:03.950609 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 00:16:03.950619 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 24 00:16:03.950626 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 24 00:16:03.950633 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 24 00:16:03.950640 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 24 00:16:03.950648 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 24 00:16:03.950655 kernel: active return thunk: srso_alias_return_thunk
Apr 24 00:16:03.950662 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 24 00:16:03.950669 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 24 00:16:03.950678 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 00:16:03.951718 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 00:16:03.951733 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 00:16:03.951741 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 00:16:03.951748 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 24 00:16:03.951755 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 00:16:03.951762 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 24 00:16:03.951770 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 24 00:16:03.951777 kernel: Freeing SMP alternatives memory: 32K
Apr 24 00:16:03.951788 kernel: pid_max: default: 32768 minimum: 301
Apr 24 00:16:03.951796 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 24 00:16:03.951803 kernel: landlock: Up and running.
Apr 24 00:16:03.951810 kernel: SELinux: Initializing.
Apr 24 00:16:03.951817 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:16:03.951825 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:16:03.951832 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 24 00:16:03.951840 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 24 00:16:03.951847 kernel: ... version: 0
Apr 24 00:16:03.951856 kernel: ... bit width: 48
Apr 24 00:16:03.951863 kernel: ... generic registers: 6
Apr 24 00:16:03.951870 kernel: ... value mask: 0000ffffffffffff
Apr 24 00:16:03.951877 kernel: ... max period: 00007fffffffffff
Apr 24 00:16:03.951884 kernel: ... fixed-purpose events: 0
Apr 24 00:16:03.951891 kernel: ... event mask: 000000000000003f
Apr 24 00:16:03.951898 kernel: signal: max sigframe size: 3376
Apr 24 00:16:03.951905 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 00:16:03.951913 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 00:16:03.951922 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 24 00:16:03.951929 kernel: smp: Bringing up secondary CPUs ...
Apr 24 00:16:03.951936 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 00:16:03.951943 kernel: .... node #0, CPUs: #1
Apr 24 00:16:03.951950 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 00:16:03.951957 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 24 00:16:03.951965 kernel: Memory: 3953608K/4193772K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 235480K reserved, 0K cma-reserved)
Apr 24 00:16:03.951972 kernel: devtmpfs: initialized
Apr 24 00:16:03.951979 kernel: x86/mm: Memory block size: 128MB
Apr 24 00:16:03.951989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 00:16:03.951996 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 00:16:03.952003 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 00:16:03.952010 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 00:16:03.952017 kernel: audit: initializing netlink subsys (disabled)
Apr 24 00:16:03.952024 kernel: audit: type=2000 audit(1776989760.800:1): state=initialized audit_enabled=0 res=1
Apr 24 00:16:03.952031 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 00:16:03.952038 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 00:16:03.952045 kernel: cpuidle: using governor menu
Apr 24 00:16:03.952054 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 00:16:03.952061 kernel: dca service started, version 1.12.1
Apr 24 00:16:03.952068 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 24 00:16:03.952075 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 00:16:03.952082 kernel: PCI: Using configuration type 1 for base access
Apr 24 00:16:03.952090 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 00:16:03.952097 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 00:16:03.952104 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 00:16:03.952111 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 00:16:03.952120 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 00:16:03.952127 kernel: ACPI: Added _OSI(Module Device)
Apr 24 00:16:03.952134 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 00:16:03.952141 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 00:16:03.952148 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 00:16:03.952155 kernel: ACPI: Interpreter enabled
Apr 24 00:16:03.952162 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 00:16:03.952169 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 00:16:03.952177 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 00:16:03.952186 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 00:16:03.952193 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 00:16:03.952200 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 00:16:03.953078 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 00:16:03.953279 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 00:16:03.953450 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 00:16:03.953461 kernel: PCI host bridge to bus 0000:00
Apr 24 00:16:03.953595 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 00:16:03.953807 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 00:16:03.953933 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 00:16:03.954046 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 24 00:16:03.954157 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 00:16:03.955218 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 24 00:16:03.955342 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 00:16:03.955493 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 24 00:16:03.955635 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 24 00:16:03.956806 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 24 00:16:03.956937 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 24 00:16:03.957060 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 24 00:16:03.957180 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 00:16:03.957319 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Apr 24 00:16:03.957442 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Apr 24 00:16:03.957596 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 24 00:16:03.957742 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 24 00:16:03.957877 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 24 00:16:03.958000 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Apr 24 00:16:03.958120 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 24 00:16:03.958247 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 24 00:16:03.958367 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 24 00:16:03.958495 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 24 00:16:03.958616 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 00:16:03.960619 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 24 00:16:03.962797 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Apr 24 00:16:03.962945 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Apr 24 00:16:03.963083 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 24 00:16:03.963206 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 24 00:16:03.963216 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 00:16:03.963224 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 00:16:03.963231 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 00:16:03.963239 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 00:16:03.963246 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 00:16:03.963257 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 00:16:03.963264 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 00:16:03.963271 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 00:16:03.963278 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 00:16:03.963285 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 00:16:03.963292 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 00:16:03.963299 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 00:16:03.963306 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 00:16:03.963313 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 00:16:03.963323 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 00:16:03.963330 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 00:16:03.963337 kernel: iommu: Default domain type: Translated
Apr 24 00:16:03.963344 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 00:16:03.963351 kernel: PCI: Using ACPI for IRQ routing
Apr 24 00:16:03.963359 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 00:16:03.963366 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 24 00:16:03.963373 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 24 00:16:03.963493 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 00:16:03.963619 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 00:16:03.963771 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 00:16:03.963782 kernel: vgaarb: loaded
Apr 24 00:16:03.963789 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 00:16:03.963797 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 00:16:03.963804 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 00:16:03.963811 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 00:16:03.963819 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 00:16:03.963830 kernel: pnp: PnP ACPI init
Apr 24 00:16:03.963962 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 00:16:03.963973 kernel: pnp: PnP ACPI: found 5 devices
Apr 24 00:16:03.963985 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 00:16:03.963997 kernel: NET: Registered PF_INET protocol family
Apr 24 00:16:03.964009 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 00:16:03.964020 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 00:16:03.964031 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 00:16:03.964043 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 00:16:03.964070 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 00:16:03.964078 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 00:16:03.964085 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:16:03.964092 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:16:03.964100 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 00:16:03.964107 kernel: NET: Registered PF_XDP protocol family
Apr 24 00:16:03.964229 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 00:16:03.964343 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 00:16:03.964459 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 00:16:03.964570 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 24 00:16:03.964681 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 00:16:03.964918 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 24 00:16:03.964929 kernel: PCI: CLS 0 bytes, default 64
Apr 24 00:16:03.964936 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 00:16:03.964944 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 24 00:16:03.964951 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Apr 24 00:16:03.964959 kernel: Initialise system trusted keyrings
Apr 24 00:16:03.964970 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 00:16:03.964977 kernel: Key type asymmetric registered
Apr 24 00:16:03.964984 kernel: Asymmetric key parser 'x509' registered
Apr 24 00:16:03.964991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 00:16:03.964998 kernel: io scheduler mq-deadline registered
Apr 24 00:16:03.965005 kernel: io scheduler kyber registered
Apr 24 00:16:03.965012 kernel: io scheduler bfq registered
Apr 24 00:16:03.965019 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 00:16:03.965027 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 00:16:03.965037 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 00:16:03.965044 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 00:16:03.965051 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 00:16:03.965058 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 00:16:03.965066 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 00:16:03.965073 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 00:16:03.965080 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 00:16:03.965217 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 24 00:16:03.965341 kernel: rtc_cmos 00:03: registered as rtc0
Apr 24 00:16:03.965458 kernel: rtc_cmos 00:03: setting system clock to 2026-04-24T00:16:03 UTC (1776989763)
Apr 24 00:16:03.965572 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 24 00:16:03.965582 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 24 00:16:03.965590 kernel: NET: Registered PF_INET6 protocol family
Apr 24 00:16:03.965597 kernel: Segment Routing with IPv6
Apr 24 00:16:03.965604 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 00:16:03.965611 kernel: NET: Registered PF_PACKET protocol family
Apr 24 00:16:03.965618 kernel: Key type dns_resolver registered
Apr 24 00:16:03.965629 kernel: IPI shorthand broadcast: enabled
Apr 24 00:16:03.965636 kernel: sched_clock: Marking stable (2941006249, 340778361)->(3372880532, -91095922)
Apr 24 00:16:03.965643 kernel: registered taskstats version 1
Apr 24 00:16:03.965650 kernel: Loading compiled-in X.509 certificates
Apr 24 00:16:03.965657 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 09f9b319c99eb3f54e68ef799fdb2bce5b238ec0'
Apr 24 00:16:03.965664 kernel: Demotion targets for Node 0: null
Apr 24 00:16:03.965671 kernel: Key type .fscrypt registered
Apr 24 00:16:03.965678 kernel: Key type fscrypt-provisioning registered
Apr 24 00:16:03.965701 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 00:16:03.965712 kernel: ima: Allocated hash algorithm: sha1
Apr 24 00:16:03.965719 kernel: ima: No architecture policies found
Apr 24 00:16:03.965726 kernel: clk: Disabling unused clocks
Apr 24 00:16:03.965733 kernel: Warning: unable to open an initial console.
Apr 24 00:16:03.965741 kernel: Freeing unused kernel image (initmem) memory: 46224K
Apr 24 00:16:03.965748 kernel: Write protecting the kernel read-only data: 40960k
Apr 24 00:16:03.965756 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 24 00:16:03.965763 kernel: Run /init as init process
Apr 24 00:16:03.965772 kernel: with arguments:
Apr 24 00:16:03.965779 kernel: /init
Apr 24 00:16:03.965787 kernel: with environment:
Apr 24 00:16:03.965808 kernel: HOME=/
Apr 24 00:16:03.965818 kernel: TERM=linux
Apr 24 00:16:03.967228 systemd[1]: Successfully made /usr/ read-only.
Apr 24 00:16:03.967245 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:16:03.967255 systemd[1]: Detected virtualization kvm.
Apr 24 00:16:03.967266 systemd[1]: Detected architecture x86-64.
Apr 24 00:16:03.967274 systemd[1]: Running in initrd.
Apr 24 00:16:03.967282 systemd[1]: No hostname configured, using default hostname.
Apr 24 00:16:03.967290 systemd[1]: Hostname set to .
Apr 24 00:16:03.967298 systemd[1]: Initializing machine ID from random generator.
Apr 24 00:16:03.967306 systemd[1]: Queued start job for default target initrd.target.
Apr 24 00:16:03.967313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:16:03.967321 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:16:03.967332 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 00:16:03.967340 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 00:16:03.967351 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 00:16:03.967359 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 00:16:03.967368 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 00:16:03.967376 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 00:16:03.967384 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:16:03.967394 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:16:03.967402 systemd[1]: Reached target paths.target - Path Units.
Apr 24 00:16:03.967410 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 00:16:03.967417 systemd[1]: Reached target swap.target - Swaps.
Apr 24 00:16:03.967425 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 00:16:03.967433 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 00:16:03.967441 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 00:16:03.967448 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 00:16:03.967458 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 24 00:16:03.967466 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:16:03.967474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:16:03.967484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:16:03.967492 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 00:16:03.967500 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 00:16:03.967511 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 00:16:03.967519 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 00:16:03.967527 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 24 00:16:03.967535 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 00:16:03.967542 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 00:16:03.967550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 00:16:03.967558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:16:03.967600 systemd-journald[187]: Collecting audit messages is disabled.
Apr 24 00:16:03.967637 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 00:16:03.967656 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:16:03.967670 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 00:16:03.967680 systemd-journald[187]: Journal started
Apr 24 00:16:03.969375 systemd-journald[187]: Runtime Journal (/run/log/journal/6625573dafcd4cc596f1dd6931e04207) is 8M, max 78.2M, 70.2M free.
Apr 24 00:16:03.950513 systemd-modules-load[188]: Inserted module 'overlay'
Apr 24 00:16:03.977224 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 00:16:03.982713 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 00:16:03.984240 systemd-modules-load[188]: Inserted module 'br_netfilter'
Apr 24 00:16:04.069937 kernel: Bridge firewalling registered
Apr 24 00:16:04.092087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:16:04.094330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:16:04.099096 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 00:16:04.101814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 00:16:04.109381 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 00:16:04.112907 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 00:16:04.122657 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:16:04.131797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 00:16:04.131811 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 24 00:16:04.135816 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 00:16:04.137634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 00:16:04.141816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 00:16:04.146760 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:16:04.152812 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 00:16:04.164167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:16:04.167100 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:16:04.198235 systemd-resolved[226]: Positive Trust Anchors:
Apr 24 00:16:04.199240 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 00:16:04.199272 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 00:16:04.205081 systemd-resolved[226]: Defaulting to hostname 'linux'.
Apr 24 00:16:04.206193 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 00:16:04.207370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:16:04.261733 kernel: SCSI subsystem initialized
Apr 24 00:16:04.270718 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 00:16:04.281717 kernel: iscsi: registered transport (tcp)
Apr 24 00:16:04.300835 kernel: iscsi: registered transport (qla4xxx)
Apr 24 00:16:04.300879 kernel: QLogic iSCSI HBA Driver
Apr 24 00:16:04.320501 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 00:16:04.337802 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:16:04.340574 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 00:16:04.386027 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 00:16:04.388130 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 00:16:04.440716 kernel: raid6: avx2x4 gen() 32386 MB/s
Apr 24 00:16:04.458713 kernel: raid6: avx2x2 gen() 30704 MB/s
Apr 24 00:16:04.476859 kernel: raid6: avx2x1 gen() 19573 MB/s
Apr 24 00:16:04.476905 kernel: raid6: using algorithm avx2x4 gen() 32386 MB/s
Apr 24 00:16:04.498542 kernel: raid6: .... xor() 4754 MB/s, rmw enabled
Apr 24 00:16:04.498585 kernel: raid6: using avx2x2 recovery algorithm
Apr 24 00:16:04.518713 kernel: xor: automatically using best checksumming function avx
Apr 24 00:16:04.643722 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 00:16:04.650325 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 00:16:04.652725 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:16:04.680217 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Apr 24 00:16:04.685953 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:16:04.689325 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 00:16:04.714702 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation
Apr 24 00:16:04.740148 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 00:16:04.742304 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 00:16:04.817797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:16:04.822816 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 00:16:04.888774 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 00:16:05.093785 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Apr 24 00:16:05.112915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 00:16:05.113495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:16:05.119182 kernel: scsi host0: Virtio SCSI HBA
Apr 24 00:16:05.161796 kernel: libata version 3.00 loaded.
Apr 24 00:16:05.121019 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:16:05.174278 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 24 00:16:05.132260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:16:05.175878 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:16:05.184763 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 24 00:16:05.184795 kernel: AES CTR mode by8 optimization enabled
Apr 24 00:16:05.221729 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 00:16:05.229735 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 00:16:05.235733 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 24 00:16:05.235985 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 24 00:16:05.236234 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 00:16:05.237714 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 24 00:16:05.238050 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 24 00:16:05.238249 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 24 00:16:05.238403 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 24 00:16:05.239392 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 24 00:16:05.241711 kernel: scsi host1: ahci
Apr 24 00:16:05.243734 kernel: scsi host2: ahci
Apr 24 00:16:05.243981 kernel: scsi host3: ahci
Apr 24 00:16:05.244733 kernel: scsi host4: ahci
Apr 24 00:16:05.245088 kernel: scsi host5: ahci
Apr 24 00:16:05.245758 kernel: scsi host6: ahci
Apr 24 00:16:05.247749 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Apr 24 00:16:05.247782 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Apr 24 00:16:05.247802 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Apr 24 00:16:05.247820 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Apr 24 00:16:05.247844 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Apr 24 00:16:05.247862 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Apr 24 00:16:05.247880 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 00:16:05.247896 kernel: GPT:9289727 != 167739391
Apr 24 00:16:05.247913 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 00:16:05.247930 kernel: GPT:9289727 != 167739391
Apr 24 00:16:05.247945 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 00:16:05.247962 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:16:05.247978 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 24 00:16:05.406018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:16:05.556722 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 00:16:05.556789 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 00:16:05.561841 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 00:16:05.562723 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 24 00:16:05.564710 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 00:16:05.567723 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 00:16:05.641522 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 24 00:16:05.662273 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 24 00:16:05.663592 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 00:16:05.678414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 00:16:05.688173 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 24 00:16:05.689155 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 24 00:16:05.692258 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 00:16:05.693161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:16:05.694944 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 00:16:05.697363 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 00:16:05.701788 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 00:16:05.719617 disk-uuid[611]: Primary Header is updated.
Apr 24 00:16:05.719617 disk-uuid[611]: Secondary Entries is updated.
Apr 24 00:16:05.719617 disk-uuid[611]: Secondary Header is updated.
Apr 24 00:16:05.726208 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 00:16:05.732708 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:16:05.741725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:16:06.748717 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:16:06.749065 disk-uuid[613]: The operation has completed successfully.
Apr 24 00:16:06.799134 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 00:16:06.799322 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 00:16:06.844734 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 00:16:06.857064 sh[633]: Success
Apr 24 00:16:06.876721 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 00:16:06.876765 kernel: device-mapper: uevent: version 1.0.3
Apr 24 00:16:06.879261 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 24 00:16:06.892713 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 24 00:16:06.936267 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 00:16:06.942779 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 00:16:06.967214 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 00:16:06.982721 kernel: BTRFS: device fsid b0afcb9a-4dc6-42cc-b61f-b370046a03ca devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (645)
Apr 24 00:16:06.982764 kernel: BTRFS info (device dm-0): first mount of filesystem b0afcb9a-4dc6-42cc-b61f-b370046a03ca
Apr 24 00:16:06.986920 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:16:07.001324 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Apr 24 00:16:07.001365 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 24 00:16:07.006611 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 24 00:16:07.008217 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 00:16:07.009674 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 00:16:07.010995 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 00:16:07.012853 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 00:16:07.015846 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 00:16:07.052734 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (680)
Apr 24 00:16:07.057302 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:16:07.057351 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:16:07.069283 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:16:07.069321 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:16:07.069333 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:16:07.077750 kernel: BTRFS info (device sda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:16:07.079213 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 00:16:07.083863 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 00:16:07.160207 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 00:16:07.169248 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 00:16:07.199667 ignition[741]: Ignition 2.22.0
Apr 24 00:16:07.199682 ignition[741]: Stage: fetch-offline
Apr 24 00:16:07.199751 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:16:07.202336 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 00:16:07.199766 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:16:07.199875 ignition[741]: parsed url from cmdline: ""
Apr 24 00:16:07.199881 ignition[741]: no config URL provided
Apr 24 00:16:07.199889 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 00:16:07.199902 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Apr 24 00:16:07.199910 ignition[741]: failed to fetch config: resource requires networking
Apr 24 00:16:07.200096 ignition[741]: Ignition finished successfully
Apr 24 00:16:07.220296 systemd-networkd[819]: lo: Link UP
Apr 24 00:16:07.220309 systemd-networkd[819]: lo: Gained carrier
Apr 24 00:16:07.222243 systemd-networkd[819]: Enumeration completed
Apr 24 00:16:07.222630 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 00:16:07.222635 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 00:16:07.222966 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 00:16:07.224599 systemd[1]: Reached target network.target - Network.
Apr 24 00:16:07.225389 systemd-networkd[819]: eth0: Link UP
Apr 24 00:16:07.225676 systemd-networkd[819]: eth0: Gained carrier
Apr 24 00:16:07.225718 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 00:16:07.230831 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 00:16:07.264757 ignition[823]: Ignition 2.22.0
Apr 24 00:16:07.264777 ignition[823]: Stage: fetch
Apr 24 00:16:07.264957 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:16:07.264970 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:16:07.265050 ignition[823]: parsed url from cmdline: ""
Apr 24 00:16:07.265054 ignition[823]: no config URL provided
Apr 24 00:16:07.265060 ignition[823]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 00:16:07.265068 ignition[823]: no config at "/usr/lib/ignition/user.ign"
Apr 24 00:16:07.265095 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 24 00:16:07.265497 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:16:07.465763 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 24 00:16:07.465962 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:16:07.866595 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 24 00:16:07.866788 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:16:07.972760 systemd-networkd[819]: eth0: DHCPv4 address 172.234.204.89/24, gateway 172.234.204.1 acquired from 23.205.167.221
Apr 24 00:16:08.659866 systemd-networkd[819]: eth0: Gained IPv6LL
Apr 24 00:16:08.666923 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 24 00:16:08.752880 ignition[823]: PUT result: OK
Apr 24 00:16:08.752939 ignition[823]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 24 00:16:08.895829 ignition[823]: GET result: OK
Apr 24 00:16:08.901833 unknown[823]: fetched base config from "system"
Apr 24 00:16:08.895927 ignition[823]: parsing config with SHA512: 6d9e4b0320075d2db2af1c597643104388219a4f215e32cfb0e7e02224b22041b0cd071775975253c577da86a5b14dd160b63b83c6a8b7db92d4de15d540fe80
Apr 24 00:16:08.901845 unknown[823]: fetched base config from "system"
Apr 24 00:16:08.902873 ignition[823]: fetch: fetch complete
Apr 24 00:16:08.901872 unknown[823]: fetched user config from "akamai"
Apr 24 00:16:08.902886 ignition[823]: fetch: fetch passed
Apr 24 00:16:08.917505 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 00:16:08.903034 ignition[823]: Ignition finished successfully
Apr 24 00:16:08.921808 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 00:16:08.949704 ignition[830]: Ignition 2.22.0
Apr 24 00:16:08.949716 ignition[830]: Stage: kargs
Apr 24 00:16:08.949844 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:16:08.949856 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:16:08.950432 ignition[830]: kargs: kargs passed
Apr 24 00:16:08.954734 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 00:16:08.950471 ignition[830]: Ignition finished successfully
Apr 24 00:16:08.957112 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 00:16:08.986894 ignition[836]: Ignition 2.22.0
Apr 24 00:16:08.986909 ignition[836]: Stage: disks
Apr 24 00:16:08.987023 ignition[836]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:16:08.987033 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:16:08.989374 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 00:16:08.987621 ignition[836]: disks: disks passed
Apr 24 00:16:08.991445 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 00:16:08.987662 ignition[836]: Ignition finished successfully
Apr 24 00:16:08.992517 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 00:16:08.993952 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 00:16:08.995516 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 00:16:08.996883 systemd[1]: Reached target basic.target - Basic System.
Apr 24 00:16:08.999335 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 00:16:09.029143 systemd-fsck[844]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 24 00:16:09.031649 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 00:16:09.034567 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 00:16:09.140723 kernel: EXT4-fs (sda9): mounted filesystem 8c3ace63-1728-4b5e-a7b6-4ef650e41ba1 r/w with ordered data mode. Quota mode: none.
Apr 24 00:16:09.141812 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 00:16:09.143049 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 00:16:09.145380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 00:16:09.147631 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 00:16:09.150164 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 00:16:09.150217 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 00:16:09.150242 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 00:16:09.158603 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 00:16:09.161269 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 00:16:09.165454 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (852)
Apr 24 00:16:09.169288 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:16:09.169320 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:16:09.178721 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:16:09.178751 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:16:09.182611 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:16:09.184131 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 00:16:09.224038 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 00:16:09.229725 initrd-setup-root[883]: cut: /sysroot/etc/group: No such file or directory
Apr 24 00:16:09.234940 initrd-setup-root[890]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 00:16:09.239467 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 00:16:09.333968 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 00:16:09.336329 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 00:16:09.337758 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 00:16:09.353000 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 00:16:09.358787 kernel: BTRFS info (device sda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:16:09.372746 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 00:16:09.387967 ignition[966]: INFO : Ignition 2.22.0
Apr 24 00:16:09.388897 ignition[966]: INFO : Stage: mount
Apr 24 00:16:09.388897 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:16:09.388897 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:16:09.392067 ignition[966]: INFO : mount: mount passed
Apr 24 00:16:09.392067 ignition[966]: INFO : Ignition finished successfully
Apr 24 00:16:09.392241 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 00:16:09.394358 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 00:16:10.143272 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 00:16:10.165731 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (977)
Apr 24 00:16:10.169990 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:16:10.170029 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:16:10.177027 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:16:10.177053 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:16:10.181140 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:16:10.183683 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 00:16:10.220347 ignition[993]: INFO : Ignition 2.22.0
Apr 24 00:16:10.220347 ignition[993]: INFO : Stage: files
Apr 24 00:16:10.222413 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:16:10.222413 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:16:10.222413 ignition[993]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 00:16:10.222413 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 00:16:10.222413 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 00:16:10.228067 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 00:16:10.228067 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 00:16:10.228067 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 00:16:10.225824 unknown[993]: wrote ssh authorized keys file for user: core
Apr 24 00:16:10.232091 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:16:10.232091 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 00:16:10.514584 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 00:16:10.590024 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:16:10.590024 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 00:16:10.593070 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 24 00:16:10.775353 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 00:16:10.848305 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 00:16:10.849849 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 00:16:10.894936 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 24 00:16:11.308389 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 24 00:16:11.931881 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 00:16:11.931881 ignition[993]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:16:11.934575 ignition[993]: INFO : files: files passed
Apr 24 00:16:11.934575 ignition[993]: INFO : Ignition finished successfully
Apr 24 00:16:11.937955 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 00:16:11.941863 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 00:16:11.946373 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 00:16:11.957518 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 00:16:11.961803 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 00:16:11.972998 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:16:11.974807 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:16:11.976089 initrd-setup-root-after-ignition[1023]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:16:11.975969 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:16:11.977081 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 00:16:11.979520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 00:16:12.035048 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 00:16:12.035210 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 00:16:12.036676 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 00:16:12.037834 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 00:16:12.039498 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 00:16:12.040329 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 00:16:12.059781 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:16:12.062019 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 00:16:12.084417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:16:12.085785 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:16:12.087381 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 00:16:12.088925 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 00:16:12.089069 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:16:12.090708 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 00:16:12.091660 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 00:16:12.093244 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 00:16:12.094666 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 00:16:12.096077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 00:16:12.097842 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 00:16:12.099443 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 00:16:12.100953 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 00:16:12.102602 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 00:16:12.104144 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 00:16:12.105659 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 00:16:12.107153 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 00:16:12.107299 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 00:16:12.108984 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:16:12.109992 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:16:12.111422 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 00:16:12.111524 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:16:12.113010 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 00:16:12.113154 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 00:16:12.115133 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 00:16:12.115245 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:16:12.116239 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 00:16:12.116373 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 00:16:12.119786 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 00:16:12.121862 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 00:16:12.123751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 00:16:12.123905 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:16:12.126224 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 00:16:12.126411 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 00:16:12.134657 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 00:16:12.134791 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 00:16:12.165224 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 00:16:12.167749 ignition[1047]: INFO : Ignition 2.22.0
Apr 24 00:16:12.167749 ignition[1047]: INFO : Stage: umount
Apr 24 00:16:12.167749 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:16:12.167749 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:16:12.173645 ignition[1047]: INFO : umount: umount passed
Apr 24 00:16:12.173645 ignition[1047]: INFO : Ignition finished successfully
Apr 24 00:16:12.170108 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 00:16:12.170216 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 00:16:12.174227 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 00:16:12.174597 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 00:16:12.176080 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 00:16:12.176133 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 00:16:12.177305 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 00:16:12.177357 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 00:16:12.178678 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 00:16:12.178793 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 00:16:12.180090 systemd[1]: Stopped target network.target - Network.
Apr 24 00:16:12.181423 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 00:16:12.181476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 00:16:12.182889 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 00:16:12.184262 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 00:16:12.189744 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:16:12.191212 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 00:16:12.192757 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 00:16:12.194415 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 00:16:12.194474 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 00:16:12.195832 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 00:16:12.195875 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 00:16:12.197183 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 00:16:12.197236 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 00:16:12.198572 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 00:16:12.198641 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 00:16:12.199974 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 00:16:12.200025 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 00:16:12.201429 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 00:16:12.202980 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 00:16:12.209344 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 00:16:12.209494 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 00:16:12.214982 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 24 00:16:12.215273 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 00:16:12.215399 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 00:16:12.218150 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 24 00:16:12.218805 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 24 00:16:12.220247 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 00:16:12.220295 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:16:12.222640 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 00:16:12.224799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 00:16:12.224856 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 00:16:12.225625 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 00:16:12.225681 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:16:12.227960 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 00:16:12.228015 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:16:12.229446 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 00:16:12.229498 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:16:12.231513 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:16:12.236102 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 24 00:16:12.236171 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:16:12.257587 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 00:16:12.257838 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:16:12.260079 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 00:16:12.260192 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:16:12.262019 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 00:16:12.262077 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:16:12.263646 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 00:16:12.263750 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 00:16:12.265984 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 00:16:12.266049 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 00:16:12.267645 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 00:16:12.267773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 00:16:12.270324 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 00:16:12.272147 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 24 00:16:12.272236 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:16:12.275867 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 00:16:12.275933 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:16:12.279016 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 00:16:12.279070 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 00:16:12.281369 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 00:16:12.281430 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:16:12.282374 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 00:16:12.282442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:16:12.288084 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 24 00:16:12.288170 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 24 00:16:12.288260 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 24 00:16:12.288354 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:16:12.290921 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 00:16:12.291085 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 00:16:12.295408 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 00:16:12.295592 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 00:16:12.297081 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 00:16:12.299562 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 00:16:12.336667 systemd[1]: Switching root.
Apr 24 00:16:12.365440 systemd-journald[187]: Journal stopped
Apr 24 00:16:13.702319 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Apr 24 00:16:13.702356 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 00:16:13.702370 kernel: SELinux: policy capability open_perms=1
Apr 24 00:16:13.702379 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 00:16:13.702388 kernel: SELinux: policy capability always_check_network=0
Apr 24 00:16:13.702400 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 00:16:13.702410 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 00:16:13.702420 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 00:16:13.702429 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 00:16:13.702440 kernel: SELinux: policy capability userspace_initial_context=0
Apr 24 00:16:13.702450 kernel: audit: type=1403 audit(1776989772.573:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 00:16:13.702460 systemd[1]: Successfully loaded SELinux policy in 76.973ms.
Apr 24 00:16:13.702473 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.757ms.
Apr 24 00:16:13.702484 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:16:13.702495 systemd[1]: Detected virtualization kvm.
Apr 24 00:16:13.702505 systemd[1]: Detected architecture x86-64.
Apr 24 00:16:13.702517 systemd[1]: Detected first boot.
Apr 24 00:16:13.702528 systemd[1]: Initializing machine ID from random generator.
Apr 24 00:16:13.702538 zram_generator::config[1090]: No configuration found.
Apr 24 00:16:13.702549 kernel: Guest personality initialized and is inactive
Apr 24 00:16:13.702559 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 24 00:16:13.702568 kernel: Initialized host personality
Apr 24 00:16:13.702577 kernel: NET: Registered PF_VSOCK protocol family
Apr 24 00:16:13.702587 systemd[1]: Populated /etc with preset unit settings.
Apr 24 00:16:13.702600 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 24 00:16:13.702610 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 00:16:13.702620 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 00:16:13.702630 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 00:16:13.702641 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 00:16:13.702651 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 00:16:13.702663 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 00:16:13.702675 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 00:16:13.703511 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 00:16:13.703534 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 00:16:13.703546 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 00:16:13.703556 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 00:16:13.703567 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:16:13.703578 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:16:13.703588 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 00:16:13.703603 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 00:16:13.703616 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 00:16:13.703627 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 00:16:13.703638 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 00:16:13.703648 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:16:13.703659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:16:13.703669 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 00:16:13.703682 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 00:16:13.703709 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 00:16:13.703720 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 00:16:13.703731 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:16:13.703741 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 00:16:13.703751 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 00:16:13.703762 systemd[1]: Reached target swap.target - Swaps.
Apr 24 00:16:13.703772 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 00:16:13.703783 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 00:16:13.703796 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 24 00:16:13.703807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:16:13.703818 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:16:13.703828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:16:13.703841 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 00:16:13.703851 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 00:16:13.703862 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 00:16:13.703872 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 00:16:13.703882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:16:13.703893 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 00:16:13.703903 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 00:16:13.703914 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 00:16:13.703927 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 00:16:13.703937 systemd[1]: Reached target machines.target - Containers.
Apr 24 00:16:13.703947 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 00:16:13.703958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 00:16:13.703968 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 00:16:13.703979 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 00:16:13.703989 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 00:16:13.703999 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 00:16:13.704010 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 00:16:13.704022 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 00:16:13.704033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 00:16:13.704043 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 00:16:13.704054 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 00:16:13.704064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 00:16:13.704075 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 00:16:13.704085 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 00:16:13.704096 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 00:16:13.704109 kernel: fuse: init (API version 7.41)
Apr 24 00:16:13.704119 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 00:16:13.704129 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 00:16:13.704140 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 00:16:13.704150 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 00:16:13.704160 kernel: ACPI: bus type drm_connector registered
Apr 24 00:16:13.704170 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 24 00:16:13.704181 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 00:16:13.704195 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 00:16:13.704206 systemd[1]: Stopped verity-setup.service.
Apr 24 00:16:13.704216 kernel: loop: module loaded
Apr 24 00:16:13.704226 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:16:13.704237 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 00:16:13.704247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 00:16:13.704258 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 00:16:13.704268 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 00:16:13.704278 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 00:16:13.704291 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 00:16:13.704301 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 00:16:13.704311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:16:13.704322 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 00:16:13.704332 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 00:16:13.704342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 00:16:13.704380 systemd-journald[1181]: Collecting audit messages is disabled.
Apr 24 00:16:13.704405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 00:16:13.704416 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 00:16:13.704426 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 00:16:13.704437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 00:16:13.704447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 00:16:13.704461 systemd-journald[1181]: Journal started
Apr 24 00:16:13.704481 systemd-journald[1181]: Runtime Journal (/run/log/journal/b58017f866cf4742a04644272f1a4415) is 8M, max 78.2M, 70.2M free.
Apr 24 00:16:13.211564 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 00:16:13.707708 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 00:16:13.223518 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 24 00:16:13.224118 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 00:16:13.709361 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 00:16:13.709584 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 00:16:13.710593 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 00:16:13.710863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 00:16:13.713285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:16:13.714337 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:16:13.715746 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 00:16:13.717006 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 24 00:16:13.730662 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 00:16:13.735793 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 00:16:13.739859 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 00:16:13.741452 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 00:16:13.741554 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 00:16:13.743200 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 24 00:16:13.749830 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 00:16:13.750752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 00:16:13.753854 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 00:16:13.758034 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 00:16:13.759227 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 00:16:13.760801 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 00:16:13.762775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 00:16:13.765459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 00:16:13.769816 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 00:16:13.774676 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 00:16:13.778659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:16:13.779905 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 00:16:13.781843 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 00:16:13.791591 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 00:16:13.792482 systemd-journald[1181]: Time spent on flushing to /var/log/journal/b58017f866cf4742a04644272f1a4415 is 106.329ms for 1016 entries.
Apr 24 00:16:13.792482 systemd-journald[1181]: System Journal (/var/log/journal/b58017f866cf4742a04644272f1a4415) is 8M, max 195.6M, 187.6M free.
Apr 24 00:16:13.903556 systemd-journald[1181]: Received client request to flush runtime journal.
Apr 24 00:16:13.903620 kernel: loop0: detected capacity change from 0 to 219192
Apr 24 00:16:13.903662 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 00:16:13.800948 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 00:16:13.810670 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 24 00:16:13.853386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:16:13.890570 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Apr 24 00:16:13.890589 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Apr 24 00:16:13.896624 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 24 00:16:13.902320 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 00:16:13.907910 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 00:16:13.915861 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 00:16:13.934152 kernel: loop1: detected capacity change from 0 to 110984
Apr 24 00:16:13.968731 kernel: loop2: detected capacity change from 0 to 8
Apr 24 00:16:14.001725 kernel: loop3: detected capacity change from 0 to 128560
Apr 24 00:16:14.000232 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 00:16:14.004132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 00:16:14.038000 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Apr 24 00:16:14.042709 kernel: loop4: detected capacity change from 0 to 219192 Apr 24 00:16:14.041029 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Apr 24 00:16:14.046714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 00:16:14.075909 kernel: loop5: detected capacity change from 0 to 110984 Apr 24 00:16:14.095724 kernel: loop6: detected capacity change from 0 to 8 Apr 24 00:16:14.102877 kernel: loop7: detected capacity change from 0 to 128560 Apr 24 00:16:14.121600 (sd-merge)[1244]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Apr 24 00:16:14.123313 (sd-merge)[1244]: Merged extensions into '/usr'. Apr 24 00:16:14.128856 systemd[1]: Reload requested from client PID 1215 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 00:16:14.128949 systemd[1]: Reloading... Apr 24 00:16:14.262803 zram_generator::config[1271]: No configuration found. Apr 24 00:16:14.356898 ldconfig[1210]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 00:16:14.518222 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 00:16:14.519049 systemd[1]: Reloading finished in 387 ms. Apr 24 00:16:14.536331 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 00:16:14.537503 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 00:16:14.538571 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 00:16:14.552037 systemd[1]: Starting ensure-sysext.service... Apr 24 00:16:14.556816 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 00:16:14.561606 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 00:16:14.580792 systemd[1]: Reload requested from client PID 1315 ('systemctl') (unit ensure-sysext.service)... 
Apr 24 00:16:14.580812 systemd[1]: Reloading... Apr 24 00:16:14.582786 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 24 00:16:14.583067 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 24 00:16:14.583420 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 00:16:14.583815 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 00:16:14.584794 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 00:16:14.585119 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Apr 24 00:16:14.585244 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Apr 24 00:16:14.590156 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 00:16:14.590232 systemd-tmpfiles[1316]: Skipping /boot Apr 24 00:16:14.601594 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 00:16:14.601670 systemd-tmpfiles[1316]: Skipping /boot Apr 24 00:16:14.631499 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Apr 24 00:16:14.703719 zram_generator::config[1352]: No configuration found. Apr 24 00:16:14.924727 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 00:16:14.961721 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 24 00:16:14.984732 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 24 00:16:14.989713 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 24 00:16:15.023048 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 24 00:16:15.023434 systemd[1]: Reloading finished in 442 ms. 
Apr 24 00:16:15.031826 kernel: ACPI: button: Power Button [PWRF] Apr 24 00:16:15.031927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 00:16:15.034279 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 00:16:15.061576 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 00:16:15.065875 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 00:16:15.068419 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 00:16:15.074599 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 00:16:15.078195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 00:16:15.085263 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 00:16:15.095231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:16:15.095432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 00:16:15.097933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 00:16:15.106027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 00:16:15.109614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 00:16:15.110867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 00:16:15.111181 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Apr 24 00:16:15.114638 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 00:16:15.119847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:16:15.130353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 00:16:15.130565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 00:16:15.133245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 00:16:15.133451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 00:16:15.136703 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 00:16:15.138254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 00:16:15.155922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:16:15.156198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 00:16:15.160775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 00:16:15.165944 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 00:16:15.172161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 00:16:15.176029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 00:16:15.177710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 00:16:15.177815 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Apr 24 00:16:15.177931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:16:15.179195 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 00:16:15.181349 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 00:16:15.197189 systemd[1]: Finished ensure-sysext.service. Apr 24 00:16:15.218925 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 00:16:15.223821 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 00:16:15.256185 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 00:16:15.256755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 00:16:15.258060 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 00:16:15.258516 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 00:16:15.260374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 00:16:15.260752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 00:16:15.262394 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 00:16:15.264262 augenrules[1480]: No rules Apr 24 00:16:15.265921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 00:16:15.267174 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 00:16:15.267492 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 24 00:16:15.268650 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 00:16:15.278623 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Apr 24 00:16:15.288661 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 00:16:15.288788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 00:16:15.288822 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 00:16:15.300328 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 24 00:16:15.307997 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 00:16:15.323753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:16:15.338847 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 00:16:15.348583 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 00:16:15.372740 kernel: EDAC MC: Ver: 3.0.0 Apr 24 00:16:15.552455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:16:15.565199 systemd-networkd[1439]: lo: Link UP Apr 24 00:16:15.565734 systemd-networkd[1439]: lo: Gained carrier Apr 24 00:16:15.567494 systemd-networkd[1439]: Enumeration completed Apr 24 00:16:15.567587 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 00:16:15.569951 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:16:15.569963 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 24 00:16:15.570868 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 24 00:16:15.572063 systemd-networkd[1439]: eth0: Link UP Apr 24 00:16:15.572256 systemd-networkd[1439]: eth0: Gained carrier Apr 24 00:16:15.572269 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:16:15.573891 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 00:16:15.575847 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 00:16:15.577828 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 00:16:15.582024 systemd-resolved[1440]: Positive Trust Anchors: Apr 24 00:16:15.582359 systemd-resolved[1440]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 00:16:15.582456 systemd-resolved[1440]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 00:16:15.589869 systemd-resolved[1440]: Defaulting to hostname 'linux'. Apr 24 00:16:15.591638 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 00:16:15.592924 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 24 00:16:15.594239 systemd[1]: Reached target network.target - Network. 
Apr 24 00:16:15.594957 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 00:16:15.595722 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 00:16:15.596539 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 00:16:15.597339 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 00:16:15.598115 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 24 00:16:15.599185 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 00:16:15.600012 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 00:16:15.600775 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 00:16:15.601522 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 00:16:15.601558 systemd[1]: Reached target paths.target - Path Units. Apr 24 00:16:15.602241 systemd[1]: Reached target timers.target - Timer Units. Apr 24 00:16:15.604349 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 00:16:15.606363 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 00:16:15.609085 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 24 00:16:15.609984 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 24 00:16:15.610746 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 24 00:16:15.613340 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 00:16:15.614368 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Apr 24 00:16:15.615776 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 00:16:15.617194 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 00:16:15.617894 systemd[1]: Reached target basic.target - Basic System. Apr 24 00:16:15.618635 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 00:16:15.618672 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 00:16:15.619616 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 00:16:15.623803 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 00:16:15.631419 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 00:16:15.634463 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 00:16:15.648448 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 00:16:15.658852 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 00:16:15.660761 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 00:16:15.661870 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 24 00:16:15.665867 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 00:16:15.669880 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 00:16:15.672282 jq[1521]: false Apr 24 00:16:15.677571 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 00:16:15.681875 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 24 00:16:15.682015 oslogin_cache_refresh[1525]: Refreshing passwd entry cache Apr 24 00:16:15.686019 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache Apr 24 00:16:15.686019 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting Apr 24 00:16:15.686019 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 24 00:16:15.686019 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache Apr 24 00:16:15.685862 oslogin_cache_refresh[1525]: Failure getting users, quitting Apr 24 00:16:15.685878 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 24 00:16:15.686438 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting Apr 24 00:16:15.686438 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 24 00:16:15.685929 oslogin_cache_refresh[1525]: Refreshing group entry cache Apr 24 00:16:15.686374 oslogin_cache_refresh[1525]: Failure getting groups, quitting Apr 24 00:16:15.686385 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 24 00:16:15.688444 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 00:16:15.691936 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 00:16:15.692442 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 00:16:15.693717 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 00:16:15.703678 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Apr 24 00:16:15.718167 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 00:16:15.721138 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 00:16:15.721427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 00:16:15.721980 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 24 00:16:15.722258 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 24 00:16:15.738333 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 00:16:15.740970 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 00:16:15.747176 coreos-metadata[1518]: Apr 24 00:16:15.746 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 24 00:16:15.760556 jq[1536]: true Apr 24 00:16:15.761026 extend-filesystems[1524]: Found /dev/sda6 Apr 24 00:16:15.768527 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 00:16:15.768799 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 00:16:15.772916 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 00:16:15.778709 extend-filesystems[1524]: Found /dev/sda9 Apr 24 00:16:15.781776 update_engine[1533]: I20260424 00:16:15.779834 1533 main.cc:92] Flatcar Update Engine starting Apr 24 00:16:15.787504 extend-filesystems[1524]: Checking size of /dev/sda9 Apr 24 00:16:15.811973 extend-filesystems[1524]: Resized partition /dev/sda9 Apr 24 00:16:15.815810 tar[1538]: linux-amd64/LICENSE Apr 24 00:16:15.815810 tar[1538]: linux-amd64/helm Apr 24 00:16:15.823569 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025) Apr 24 00:16:15.822207 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 24 00:16:15.822014 dbus-daemon[1519]: [system] SELinux support is enabled Apr 24 00:16:15.828656 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 00:16:15.830792 jq[1559]: true Apr 24 00:16:15.828941 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 00:16:15.830788 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 00:16:15.830813 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 00:16:15.838846 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Apr 24 00:16:15.858104 systemd[1]: Started update-engine.service - Update Engine. Apr 24 00:16:15.861530 update_engine[1533]: I20260424 00:16:15.861479 1533 update_check_scheduler.cc:74] Next update check in 11m38s Apr 24 00:16:15.866072 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 00:16:15.887968 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button) Apr 24 00:16:15.891091 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 00:16:15.891499 systemd-logind[1532]: New seat seat0. Apr 24 00:16:15.894546 systemd[1]: Started systemd-logind.service - User Login Management. Apr 24 00:16:15.957561 bash[1591]: Updated "/home/core/.ssh/authorized_keys" Apr 24 00:16:15.958821 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 00:16:15.966102 systemd[1]: Starting sshkeys.service... Apr 24 00:16:16.015820 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Apr 24 00:16:16.018339 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 24 00:16:16.092824 containerd[1553]: time="2026-04-24T00:16:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 24 00:16:16.094018 containerd[1553]: time="2026-04-24T00:16:16.093378271Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 24 00:16:16.136717 containerd[1553]: time="2026-04-24T00:16:16.135904093Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.71µs" Apr 24 00:16:16.136717 containerd[1553]: time="2026-04-24T00:16:16.135955703Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 24 00:16:16.136717 containerd[1553]: time="2026-04-24T00:16:16.135983363Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 24 00:16:16.136717 containerd[1553]: time="2026-04-24T00:16:16.136196614Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 24 00:16:16.136717 containerd[1553]: time="2026-04-24T00:16:16.136216624Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 24 00:16:16.136717 containerd[1553]: time="2026-04-24T00:16:16.136248804Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 24 00:16:16.136717 containerd[1553]: time="2026-04-24T00:16:16.136325144Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 24 00:16:16.136717 containerd[1553]: time="2026-04-24T00:16:16.136339904Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 24 00:16:16.142527 containerd[1553]: time="2026-04-24T00:16:16.139721607Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 24 00:16:16.142527 containerd[1553]: time="2026-04-24T00:16:16.139754017Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 24 00:16:16.142527 containerd[1553]: time="2026-04-24T00:16:16.139772467Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 24 00:16:16.142527 containerd[1553]: time="2026-04-24T00:16:16.139784657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 24 00:16:16.142527 containerd[1553]: time="2026-04-24T00:16:16.139924537Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 24 00:16:16.142527 containerd[1553]: time="2026-04-24T00:16:16.140232178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 24 00:16:16.142527 containerd[1553]: time="2026-04-24T00:16:16.140277188Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 24 00:16:16.142527 containerd[1553]: time="2026-04-24T00:16:16.140292528Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 24 00:16:16.145219 containerd[1553]: time="2026-04-24T00:16:16.144747482Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 24 00:16:16.145414 containerd[1553]: time="2026-04-24T00:16:16.145388413Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 24 00:16:16.145578 containerd[1553]: time="2026-04-24T00:16:16.145557223Z" level=info msg="metadata content store policy set" policy=shared Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176739384Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176817164Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176837174Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176853324Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176870484Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176885214Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176921174Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176937664Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176952944Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176966804Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.176979694Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.177002704Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.177160264Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 24 00:16:16.179717 containerd[1553]: time="2026-04-24T00:16:16.177185944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177211645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177227655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177244425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177259365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177274835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177289955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 24 
00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177306975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177322015Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177337675Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177402005Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177423405Z" level=info msg="Start snapshots syncer" Apr 24 00:16:16.180201 containerd[1553]: time="2026-04-24T00:16:16.177466625Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 24 00:16:16.189034 coreos-metadata[1594]: Apr 24 00:16:16.187 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 24 00:16:16.189368 containerd[1553]: time="2026-04-24T00:16:16.187827945Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 24 00:16:16.189368 containerd[1553]: time="2026-04-24T00:16:16.187890735Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.187951275Z" level=info 
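The `config="{...}"` blob above is the CRI plugin's effective configuration, serialized as escaped JSON into a single log field. A quick way to sanity-check such a dump is to parse it and read back the settings the rest of the boot depends on; the sketch below uses a trimmed, hypothetical excerpt of that blob (the real dump has many more keys):

```python
import json

# Hypothetical trimmed excerpt of the CRI plugin's config dump from the log
# above; only a few keys are reproduced here for illustration.
cri_config_dump = '''{
  "containerd": {
    "defaultRuntimeName": "runc",
    "runtimes": {
      "runc": {
        "runtimeType": "io.containerd.runc.v2",
        "options": {"SystemdCgroup": true}
      }
    }
  },
  "cni": {"binDir": "/opt/cni/bin", "confDir": "/etc/cni/net.d"},
  "enableSelinux": true
}'''

cfg = json.loads(cri_config_dump)
runc_opts = cfg["containerd"]["runtimes"]["runc"]["options"]
print("SystemdCgroup:", runc_opts["SystemdCgroup"])  # cgroup driver is systemd
print("CNI conf dir:", cfg["cni"]["confDir"])        # where CNI configs are expected
```

Note that `SystemdCgroup=true` and `confDir=/etc/cni/net.d` both reappear later in the log: the former matters for the kubelet's cgroup driver, the latter is the directory the CNI error below complains about.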
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188082825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188102615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188112305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188122035Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188134405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188143475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188159185Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188184525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188211376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188222376Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188248026Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188260566Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 24 00:16:16.189518 containerd[1553]: time="2026-04-24T00:16:16.188268326Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188276696Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188283326Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188291386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188306246Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188321966Z" level=info msg="runtime interface created" Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188327476Z" level=info msg="created NRI interface" Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188339306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188351486Z" level=info msg="Connect containerd service" Apr 24 00:16:16.190158 containerd[1553]: time="2026-04-24T00:16:16.188366816Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 00:16:16.191478 
containerd[1553]: time="2026-04-24T00:16:16.191385589Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 00:16:16.211719 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 00:16:16.217718 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Apr 24 00:16:16.241760 locksmithd[1572]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 00:16:16.243498 extend-filesystems[1568]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 24 00:16:16.243498 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 24 00:16:16.243498 extend-filesystems[1568]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Apr 24 00:16:16.251765 extend-filesystems[1524]: Resized filesystem in /dev/sda9 Apr 24 00:16:16.244308 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 00:16:16.244613 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 00:16:16.265537 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 00:16:16.269839 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 00:16:16.288664 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 00:16:16.288927 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 00:16:16.294237 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 00:16:16.302771 systemd-networkd[1439]: eth0: DHCPv4 address 172.234.204.89/24, gateway 172.234.204.1 acquired from 23.205.167.221 Apr 24 00:16:16.303606 systemd-timesyncd[1477]: Network configuration changed, trying to establish connection. 
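The `failed to load cni during init` error is expected on a node that has not yet been joined to a cluster: `/etc/cni/net.d` is empty, and the CRI plugin's conf syncer retries once a network config appears. For reference, a minimal conflist of the kind the plugin looks for would resemble the following (file name, network name, and subnet are illustrative, not taken from this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]]
      }
    }
  ]
}
```

Dropped into `/etc/cni/net.d/` (e.g. as `10-example.conflist`), a file like this would satisfy the syncer; in practice a CNI plugin such as flannel or Calico writes its own config there after cluster join.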
Apr 24 00:16:16.304828 dbus-daemon[1519]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1439 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 24 00:16:16.309813 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 24 00:16:16.327311 containerd[1553]: time="2026-04-24T00:16:16.327280745Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 00:16:16.327466 containerd[1553]: time="2026-04-24T00:16:16.327450545Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 00:16:16.327525 containerd[1553]: time="2026-04-24T00:16:16.327512985Z" level=info msg="Start subscribing containerd event" Apr 24 00:16:16.327583 containerd[1553]: time="2026-04-24T00:16:16.327572695Z" level=info msg="Start recovering state" Apr 24 00:16:16.327712 containerd[1553]: time="2026-04-24T00:16:16.327681835Z" level=info msg="Start event monitor" Apr 24 00:16:16.330077 containerd[1553]: time="2026-04-24T00:16:16.328737536Z" level=info msg="Start cni network conf syncer for default" Apr 24 00:16:16.330077 containerd[1553]: time="2026-04-24T00:16:16.328790786Z" level=info msg="Start streaming server" Apr 24 00:16:16.330077 containerd[1553]: time="2026-04-24T00:16:16.328801146Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 24 00:16:16.330077 containerd[1553]: time="2026-04-24T00:16:16.328809656Z" level=info msg="runtime interface starting up..." Apr 24 00:16:16.330077 containerd[1553]: time="2026-04-24T00:16:16.328815486Z" level=info msg="starting plugins..." 
Apr 24 00:16:16.330077 containerd[1553]: time="2026-04-24T00:16:16.328831746Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 24 00:16:16.330077 containerd[1553]: time="2026-04-24T00:16:16.329910557Z" level=info msg="containerd successfully booted in 0.240607s" Apr 24 00:16:16.329008 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 00:16:16.332756 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 00:16:16.337914 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 00:16:16.342806 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 00:16:16.343765 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 00:16:16.848526 systemd-resolved[1440]: Clock change detected. Flushing caches. Apr 24 00:16:16.849298 systemd-timesyncd[1477]: Contacted time server 198.46.254.130:123 (0.flatcar.pool.ntp.org). Apr 24 00:16:16.849585 systemd-timesyncd[1477]: Initial clock synchronization to Fri 2026-04-24 00:16:16.847984 UTC. Apr 24 00:16:16.886185 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 24 00:16:16.888236 dbus-daemon[1519]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 24 00:16:16.888829 dbus-daemon[1519]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1635 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 24 00:16:16.894129 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 24 00:16:16.970063 tar[1538]: linux-amd64/README.md Apr 24 00:16:16.970779 polkitd[1639]: Started polkitd version 126 Apr 24 00:16:16.975239 polkitd[1639]: Loading rules from directory /etc/polkit-1/rules.d Apr 24 00:16:16.975705 polkitd[1639]: Loading rules from directory /run/polkit-1/rules.d Apr 24 00:16:16.975755 polkitd[1639]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 24 00:16:16.975958 polkitd[1639]: Loading rules from directory /usr/local/share/polkit-1/rules.d Apr 24 00:16:16.975984 polkitd[1639]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 24 00:16:16.976019 polkitd[1639]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 24 00:16:16.976706 polkitd[1639]: Finished loading, compiling and executing 2 rules Apr 24 00:16:16.977617 systemd[1]: Started polkit.service - Authorization Manager. Apr 24 00:16:16.977656 dbus-daemon[1519]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 24 00:16:16.978922 polkitd[1639]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 24 00:16:16.989136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 00:16:16.991747 systemd-hostnamed[1635]: Hostname set to <172-234-204-89> (transient) Apr 24 00:16:16.991766 systemd-resolved[1440]: System hostname changed to '172-234-204-89'. 
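The two missing-directory errors from polkitd are harmless: it probes all four rules directories and loads whatever exists, here compiling 2 rules from `/etc/polkit-1/rules.d` and `/usr/share/polkit-1/rules.d`. Polkit rules are small JavaScript files; an illustrative rule (not one of the two actually loaded above) looks like:

```
// Illustrative polkit rule file, e.g. /etc/polkit-1/rules.d/50-example.rules:
// let members of group "wheel" manage systemd units without a password.
polkit.addRule(function (action, subject) {
    if (action.id === "org.freedesktop.systemd1.manage-units" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});
```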
Apr 24 00:16:17.240882 coreos-metadata[1518]: Apr 24 00:16:17.240 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 24 00:16:17.333648 coreos-metadata[1518]: Apr 24 00:16:17.333 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Apr 24 00:16:17.518182 coreos-metadata[1518]: Apr 24 00:16:17.518 INFO Fetch successful Apr 24 00:16:17.518323 coreos-metadata[1518]: Apr 24 00:16:17.518 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Apr 24 00:16:17.681468 coreos-metadata[1594]: Apr 24 00:16:17.681 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 24 00:16:17.771867 coreos-metadata[1594]: Apr 24 00:16:17.771 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Apr 24 00:16:17.783451 coreos-metadata[1518]: Apr 24 00:16:17.783 INFO Fetch successful Apr 24 00:16:17.918196 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 24 00:16:17.919697 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 00:16:18.006369 coreos-metadata[1594]: Apr 24 00:16:18.006 INFO Fetch successful Apr 24 00:16:18.028797 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys" Apr 24 00:16:18.030209 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 24 00:16:18.033134 systemd[1]: Finished sshkeys.service. Apr 24 00:16:18.040433 systemd-networkd[1439]: eth0: Gained IPv6LL Apr 24 00:16:18.043487 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 00:16:18.044849 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 00:16:18.047665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:16:18.051466 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 00:16:18.080883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 24 00:16:18.944100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:16:18.947627 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 00:16:18.948882 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 00:16:18.953675 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 00:16:18.956350 systemd[1]: Started sshd@0-172.234.204.89:22-20.229.252.112:57530.service - OpenSSH per-connection server daemon (20.229.252.112:57530). Apr 24 00:16:18.957296 systemd[1]: Startup finished in 3.025s (kernel) + 8.891s (initrd) + 5.974s (userspace) = 17.890s. Apr 24 00:16:19.431033 kubelet[1695]: E0424 00:16:19.430898 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 00:16:19.434503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 00:16:19.434714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 00:16:19.435084 systemd[1]: kubelet.service: Consumed 838ms CPU time, 258.7M memory peak. Apr 24 00:16:19.546356 sshd[1697]: Accepted publickey for core from 20.229.252.112 port 57530 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:16:19.547663 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:16:19.554533 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 00:16:19.555953 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 00:16:19.563614 systemd-logind[1532]: New session 1 of user core. 
Apr 24 00:16:19.575099 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 00:16:19.578409 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 00:16:19.589755 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 00:16:19.592409 systemd-logind[1532]: New session c1 of user core. Apr 24 00:16:19.713262 systemd[1712]: Queued start job for default target default.target. Apr 24 00:16:19.719484 systemd[1712]: Created slice app.slice - User Application Slice. Apr 24 00:16:19.719508 systemd[1712]: Reached target paths.target - Paths. Apr 24 00:16:19.719550 systemd[1712]: Reached target timers.target - Timers. Apr 24 00:16:19.721050 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 00:16:19.732759 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 00:16:19.733128 systemd[1712]: Reached target sockets.target - Sockets. Apr 24 00:16:19.733349 systemd[1712]: Reached target basic.target - Basic System. Apr 24 00:16:19.733495 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 00:16:19.733971 systemd[1712]: Reached target default.target - Main User Target. Apr 24 00:16:19.734309 systemd[1712]: Startup finished in 135ms. Apr 24 00:16:19.744407 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 00:16:20.068532 systemd[1]: Started sshd@1-172.234.204.89:22-20.229.252.112:57538.service - OpenSSH per-connection server daemon (20.229.252.112:57538). Apr 24 00:16:20.585242 sshd[1723]: Accepted publickey for core from 20.229.252.112 port 57538 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:16:20.586796 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:16:20.592931 systemd-logind[1532]: New session 2 of user core. Apr 24 00:16:20.602450 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 24 00:16:20.876897 sshd[1726]: Connection closed by 20.229.252.112 port 57538 Apr 24 00:16:20.878443 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Apr 24 00:16:20.881978 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit. Apr 24 00:16:20.882736 systemd[1]: sshd@1-172.234.204.89:22-20.229.252.112:57538.service: Deactivated successfully. Apr 24 00:16:20.884580 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 00:16:20.885727 systemd-logind[1532]: Removed session 2. Apr 24 00:16:20.993553 systemd[1]: Started sshd@2-172.234.204.89:22-20.229.252.112:57554.service - OpenSSH per-connection server daemon (20.229.252.112:57554). Apr 24 00:16:21.546007 sshd[1732]: Accepted publickey for core from 20.229.252.112 port 57554 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:16:21.547971 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:16:21.552631 systemd-logind[1532]: New session 3 of user core. Apr 24 00:16:21.559493 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 00:16:21.852011 sshd[1735]: Connection closed by 20.229.252.112 port 57554 Apr 24 00:16:21.853480 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Apr 24 00:16:21.857912 systemd[1]: sshd@2-172.234.204.89:22-20.229.252.112:57554.service: Deactivated successfully. Apr 24 00:16:21.859738 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 00:16:21.861223 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit. Apr 24 00:16:21.862829 systemd-logind[1532]: Removed session 3. Apr 24 00:16:21.963607 systemd[1]: Started sshd@3-172.234.204.89:22-20.229.252.112:57570.service - OpenSSH per-connection server daemon (20.229.252.112:57570). 
Apr 24 00:16:22.508077 sshd[1741]: Accepted publickey for core from 20.229.252.112 port 57570 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:16:22.509624 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:16:22.515303 systemd-logind[1532]: New session 4 of user core. Apr 24 00:16:22.521466 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 00:16:22.816502 sshd[1744]: Connection closed by 20.229.252.112 port 57570 Apr 24 00:16:22.818475 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Apr 24 00:16:22.822188 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Apr 24 00:16:22.822999 systemd[1]: sshd@3-172.234.204.89:22-20.229.252.112:57570.service: Deactivated successfully. Apr 24 00:16:22.827938 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 00:16:22.829179 systemd-logind[1532]: Removed session 4. Apr 24 00:16:22.927678 systemd[1]: Started sshd@4-172.234.204.89:22-20.229.252.112:57576.service - OpenSSH per-connection server daemon (20.229.252.112:57576). Apr 24 00:16:23.468321 sshd[1750]: Accepted publickey for core from 20.229.252.112 port 57576 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:16:23.469225 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:16:23.474266 systemd-logind[1532]: New session 5 of user core. Apr 24 00:16:23.481470 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 00:16:23.686101 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 00:16:23.686586 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:16:23.702691 sudo[1754]: pam_unix(sudo:session): session closed for user root Apr 24 00:16:23.803719 sshd[1753]: Connection closed by 20.229.252.112 port 57576 Apr 24 00:16:23.805503 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Apr 24 00:16:23.809497 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Apr 24 00:16:23.810326 systemd[1]: sshd@4-172.234.204.89:22-20.229.252.112:57576.service: Deactivated successfully. Apr 24 00:16:23.812404 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 00:16:23.814106 systemd-logind[1532]: Removed session 5. Apr 24 00:16:23.918174 systemd[1]: Started sshd@5-172.234.204.89:22-20.229.252.112:57590.service - OpenSSH per-connection server daemon (20.229.252.112:57590). Apr 24 00:16:24.462925 sshd[1760]: Accepted publickey for core from 20.229.252.112 port 57590 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:16:24.464598 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:16:24.470087 systemd-logind[1532]: New session 6 of user core. Apr 24 00:16:24.476434 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 24 00:16:24.674052 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 00:16:24.674496 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:16:24.679359 sudo[1765]: pam_unix(sudo:session): session closed for user root Apr 24 00:16:24.687416 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 24 00:16:24.687850 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:16:24.700584 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 00:16:24.749182 augenrules[1787]: No rules Apr 24 00:16:24.751029 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 00:16:24.751555 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 24 00:16:24.753258 sudo[1764]: pam_unix(sudo:session): session closed for user root Apr 24 00:16:24.855877 sshd[1763]: Connection closed by 20.229.252.112 port 57590 Apr 24 00:16:24.857526 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Apr 24 00:16:24.862260 systemd[1]: sshd@5-172.234.204.89:22-20.229.252.112:57590.service: Deactivated successfully. Apr 24 00:16:24.865417 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 00:16:24.869178 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Apr 24 00:16:24.870882 systemd-logind[1532]: Removed session 6. Apr 24 00:16:24.962584 systemd[1]: Started sshd@6-172.234.204.89:22-20.229.252.112:57594.service - OpenSSH per-connection server daemon (20.229.252.112:57594). 
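The sequence above removes the shipped audit rule files and restarts `audit-rules`, so `augenrules` correctly reports "No rules": the `/etc/audit/rules.d` fragments it concatenates are gone. For context, such a fragment is a plain auditctl rule list; an illustrative one (not one of the files removed above) might read:

```
# Illustrative audit rule fragment, e.g. /etc/audit/rules.d/10-example.rules:
# record every execve() by login UID 500, keyed for later searching.
-a always,exit -F arch=b64 -S execve -F auid=500 -k user-exec
```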
Apr 24 00:16:25.487969 sshd[1796]: Accepted publickey for core from 20.229.252.112 port 57594 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:16:25.489970 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:16:25.496909 systemd-logind[1532]: New session 7 of user core. Apr 24 00:16:25.503453 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 00:16:25.686492 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 00:16:25.686828 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:16:26.015461 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 00:16:26.028654 (dockerd)[1818]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 00:16:26.272968 dockerd[1818]: time="2026-04-24T00:16:26.272660890Z" level=info msg="Starting up" Apr 24 00:16:26.276006 dockerd[1818]: time="2026-04-24T00:16:26.275973474Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 24 00:16:26.289361 dockerd[1818]: time="2026-04-24T00:16:26.289292057Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 24 00:16:26.330056 dockerd[1818]: time="2026-04-24T00:16:26.330015338Z" level=info msg="Loading containers: start." Apr 24 00:16:26.341310 kernel: Initializing XFRM netlink socket Apr 24 00:16:26.607311 systemd-networkd[1439]: docker0: Link UP Apr 24 00:16:26.610729 dockerd[1818]: time="2026-04-24T00:16:26.610702478Z" level=info msg="Loading containers: done." Apr 24 00:16:26.624433 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4020858661-merged.mount: Deactivated successfully. 
Apr 24 00:16:26.625613 dockerd[1818]: time="2026-04-24T00:16:26.625564583Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 00:16:26.625671 dockerd[1818]: time="2026-04-24T00:16:26.625637653Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 24 00:16:26.625923 dockerd[1818]: time="2026-04-24T00:16:26.625710043Z" level=info msg="Initializing buildkit" Apr 24 00:16:26.644902 dockerd[1818]: time="2026-04-24T00:16:26.644877112Z" level=info msg="Completed buildkit initialization" Apr 24 00:16:26.651094 dockerd[1818]: time="2026-04-24T00:16:26.651068369Z" level=info msg="Daemon has completed initialization" Apr 24 00:16:26.651308 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 00:16:26.652103 dockerd[1818]: time="2026-04-24T00:16:26.652072340Z" level=info msg="API listen on /run/docker.sock" Apr 24 00:16:27.358990 containerd[1553]: time="2026-04-24T00:16:27.358952946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 24 00:16:27.996053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2488777400.mount: Deactivated successfully. 
Apr 24 00:16:29.051144 containerd[1553]: time="2026-04-24T00:16:29.051098038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:29.052071 containerd[1553]: time="2026-04-24T00:16:29.052046339Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100520" Apr 24 00:16:29.052357 containerd[1553]: time="2026-04-24T00:16:29.052327679Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:29.054721 containerd[1553]: time="2026-04-24T00:16:29.054680532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:29.055671 containerd[1553]: time="2026-04-24T00:16:29.055497202Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.696508776s" Apr 24 00:16:29.055671 containerd[1553]: time="2026-04-24T00:16:29.055525802Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 24 00:16:29.056140 containerd[1553]: time="2026-04-24T00:16:29.056095433Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 24 00:16:29.610468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
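The pull-completion entry above carries both the image size (`size "27097113"`) and the wall time (`in 1.696508776s`), which together give the effective pull rate, a useful back-of-the-envelope check when image pulls seem slow:

```python
# Effective pull throughput for kube-apiserver:v1.34.7, using the two numbers
# containerd logs in the "Pulled image" entry above.
size_bytes = 27_097_113    # reported image size in bytes
elapsed_s = 1.696508776    # reported pull duration in seconds

mib_per_s = size_bytes / elapsed_s / (1024 * 1024)
print(f"effective pull rate: {mib_per_s:.1f} MiB/s")
# prints: effective pull rate: 15.2 MiB/s
```

The later pulls in this log (controller-manager, scheduler, kube-proxy) land in the same ballpark, consistent with registry bandwidth rather than local I/O being the limiting factor.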
Apr 24 00:16:29.612579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:16:29.797012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:16:29.809603 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 00:16:29.845477 kubelet[2094]: E0424 00:16:29.845421 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 00:16:29.851059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 00:16:29.851255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 00:16:29.851714 systemd[1]: kubelet.service: Consumed 192ms CPU time, 110.3M memory peak. 
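Both kubelet failures in this log (the first at 00:16:19, this restart at 00:16:29) have the same cause: `/var/lib/kubelet/config.yaml` does not exist yet. On a kubeadm-provisioned node that file is written by `kubeadm init` or `kubeadm join`, so systemd keeps restarting the unit until the node is joined. For reference, a minimal hand-written `KubeletConfiguration` (values illustrative) looks like:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# "systemd" matches the SystemdCgroup=true seen in the containerd CRI config above.
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
authentication:
  anonymous:
    enabled: false
```

The restart loop is therefore benign noise at this stage of first boot, not a packaging or unit-file problem.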
Apr 24 00:16:30.339473 containerd[1553]: time="2026-04-24T00:16:30.339416516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:30.340256 containerd[1553]: time="2026-04-24T00:16:30.340227887Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252744"
Apr 24 00:16:30.341304 containerd[1553]: time="2026-04-24T00:16:30.341017248Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:30.346301 containerd[1553]: time="2026-04-24T00:16:30.343037830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:30.349463 containerd[1553]: time="2026-04-24T00:16:30.349427086Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.293301483s"
Apr 24 00:16:30.349550 containerd[1553]: time="2026-04-24T00:16:30.349463186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 24 00:16:30.350409 containerd[1553]: time="2026-04-24T00:16:30.350333727Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 24 00:16:31.396028 containerd[1553]: time="2026-04-24T00:16:31.395988472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:31.396799 containerd[1553]: time="2026-04-24T00:16:31.396777943Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810897"
Apr 24 00:16:31.397209 containerd[1553]: time="2026-04-24T00:16:31.397183944Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:31.399260 containerd[1553]: time="2026-04-24T00:16:31.399225996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:31.400181 containerd[1553]: time="2026-04-24T00:16:31.400049816Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.049688499s"
Apr 24 00:16:31.400181 containerd[1553]: time="2026-04-24T00:16:31.400074366Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 24 00:16:31.400675 containerd[1553]: time="2026-04-24T00:16:31.400656607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 24 00:16:32.415228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209847291.mount: Deactivated successfully.
Apr 24 00:16:32.660866 containerd[1553]: time="2026-04-24T00:16:32.660315946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:32.660866 containerd[1553]: time="2026-04-24T00:16:32.660839167Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972960"
Apr 24 00:16:32.661444 containerd[1553]: time="2026-04-24T00:16:32.661421778Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:32.662789 containerd[1553]: time="2026-04-24T00:16:32.662770029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:32.663186 containerd[1553]: time="2026-04-24T00:16:32.663154449Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.262472502s"
Apr 24 00:16:32.663227 containerd[1553]: time="2026-04-24T00:16:32.663186389Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 24 00:16:32.664022 containerd[1553]: time="2026-04-24T00:16:32.663998650Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 24 00:16:33.224108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351468358.mount: Deactivated successfully.
Apr 24 00:16:33.944504 containerd[1553]: time="2026-04-24T00:16:33.944456360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:33.945701 containerd[1553]: time="2026-04-24T00:16:33.945352931Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013"
Apr 24 00:16:33.946305 containerd[1553]: time="2026-04-24T00:16:33.946257772Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:33.948690 containerd[1553]: time="2026-04-24T00:16:33.948660564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:33.950054 containerd[1553]: time="2026-04-24T00:16:33.950022176Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.285993346s"
Apr 24 00:16:33.950169 containerd[1553]: time="2026-04-24T00:16:33.950149496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 24 00:16:33.950718 containerd[1553]: time="2026-04-24T00:16:33.950681666Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 24 00:16:34.433570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721466907.mount: Deactivated successfully.
Apr 24 00:16:34.436673 containerd[1553]: time="2026-04-24T00:16:34.436630442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:34.437217 containerd[1553]: time="2026-04-24T00:16:34.437176473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224"
Apr 24 00:16:34.438312 containerd[1553]: time="2026-04-24T00:16:34.437684343Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:34.439386 containerd[1553]: time="2026-04-24T00:16:34.439350295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:34.440314 containerd[1553]: time="2026-04-24T00:16:34.439992736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 489.287099ms"
Apr 24 00:16:34.440314 containerd[1553]: time="2026-04-24T00:16:34.440018936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 24 00:16:34.440839 containerd[1553]: time="2026-04-24T00:16:34.440796976Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 24 00:16:34.959864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524991152.mount: Deactivated successfully.
Apr 24 00:16:35.642634 containerd[1553]: time="2026-04-24T00:16:35.641685007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:35.642634 containerd[1553]: time="2026-04-24T00:16:35.642555118Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874823"
Apr 24 00:16:35.642634 containerd[1553]: time="2026-04-24T00:16:35.642594998Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:35.644958 containerd[1553]: time="2026-04-24T00:16:35.644937820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:16:35.645818 containerd[1553]: time="2026-04-24T00:16:35.645798561Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.204958134s"
Apr 24 00:16:35.645880 containerd[1553]: time="2026-04-24T00:16:35.645867201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 24 00:16:38.416989 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:16:38.417129 systemd[1]: kubelet.service: Consumed 192ms CPU time, 110.3M memory peak.
Apr 24 00:16:38.420431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:16:38.458749 systemd[1]: Reload requested from client PID 2263 ('systemctl') (unit session-7.scope)...
Apr 24 00:16:38.458867 systemd[1]: Reloading...
Apr 24 00:16:38.610305 zram_generator::config[2313]: No configuration found.
Apr 24 00:16:38.805528 systemd[1]: Reloading finished in 346 ms.
Apr 24 00:16:38.853786 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 24 00:16:38.853890 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 24 00:16:38.854238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:16:38.854299 systemd[1]: kubelet.service: Consumed 137ms CPU time, 98.3M memory peak.
Apr 24 00:16:38.855763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:16:39.022393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:16:39.031701 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 00:16:39.072737 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 00:16:39.072737 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 00:16:39.073137 kubelet[2361]: I0424 00:16:39.072704    2361 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 00:16:39.791561 kubelet[2361]: I0424 00:16:39.791512    2361 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 24 00:16:39.791561 kubelet[2361]: I0424 00:16:39.791537    2361 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 00:16:39.793171 kubelet[2361]: I0424 00:16:39.793149    2361 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 24 00:16:39.793171 kubelet[2361]: I0424 00:16:39.793167    2361 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 00:16:39.793404 kubelet[2361]: I0424 00:16:39.793383    2361 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 00:16:39.797809 kubelet[2361]: E0424 00:16:39.797776    2361 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.204.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.204.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 00:16:39.798101 kubelet[2361]: I0424 00:16:39.798081    2361 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 00:16:39.803659 kubelet[2361]: I0424 00:16:39.803634    2361 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 24 00:16:39.807842 kubelet[2361]: I0424 00:16:39.807562    2361 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 24 00:16:39.808313 kubelet[2361]: I0424 00:16:39.808258    2361 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 00:16:39.808509 kubelet[2361]: I0424 00:16:39.808318    2361 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-204-89","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 00:16:39.808616 kubelet[2361]: I0424 00:16:39.808510    2361 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 00:16:39.808616 kubelet[2361]: I0424 00:16:39.808519    2361 container_manager_linux.go:306] "Creating device plugin manager"
Apr 24 00:16:39.808616 kubelet[2361]: I0424 00:16:39.808603    2361 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 24 00:16:39.810374 kubelet[2361]: I0424 00:16:39.810355    2361 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 00:16:39.810518 kubelet[2361]: I0424 00:16:39.810504    2361 kubelet.go:475] "Attempting to sync node with API server"
Apr 24 00:16:39.810518 kubelet[2361]: I0424 00:16:39.810519    2361 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 00:16:39.810580 kubelet[2361]: I0424 00:16:39.810536    2361 kubelet.go:387] "Adding apiserver pod source"
Apr 24 00:16:39.810580 kubelet[2361]: I0424 00:16:39.810557    2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 00:16:39.812943 kubelet[2361]: I0424 00:16:39.812372    2361 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 24 00:16:39.812943 kubelet[2361]: I0424 00:16:39.812737    2361 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 00:16:39.812943 kubelet[2361]: I0424 00:16:39.812758    2361 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 24 00:16:39.812943 kubelet[2361]: W0424 00:16:39.812802    2361 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 00:16:39.815753 kubelet[2361]: E0424 00:16:39.815725    2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.204.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.204.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 00:16:39.815914 kubelet[2361]: E0424 00:16:39.815895    2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.204.89:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-204-89&limit=500&resourceVersion=0\": dial tcp 172.234.204.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 00:16:39.817088 kubelet[2361]: I0424 00:16:39.817067    2361 server.go:1262] "Started kubelet"
Apr 24 00:16:39.818496 kubelet[2361]: I0424 00:16:39.818460    2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 00:16:39.822007 kubelet[2361]: E0424 00:16:39.820835    2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.204.89:6443/api/v1/namespaces/default/events\": dial tcp 172.234.204.89:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-204-89.18a922cedb964c36  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-204-89,UID:172-234-204-89,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-204-89,},FirstTimestamp:2026-04-24 00:16:39.817038902 +0000 UTC m=+0.780743442,LastTimestamp:2026-04-24 00:16:39.817038902 +0000 UTC m=+0.780743442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-204-89,}"
Apr 24 00:16:39.822096 kubelet[2361]: I0424 00:16:39.822039    2361 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 00:16:39.823371 kubelet[2361]: I0424 00:16:39.823351    2361 server.go:310] "Adding debug handlers to kubelet server"
Apr 24 00:16:39.827177 kubelet[2361]: I0424 00:16:39.827145    2361 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 00:16:39.827216 kubelet[2361]: I0424 00:16:39.827192    2361 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 24 00:16:39.827404 kubelet[2361]: I0424 00:16:39.827374    2361 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 00:16:39.827583 kubelet[2361]: I0424 00:16:39.827560    2361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 00:16:39.828824 kubelet[2361]: I0424 00:16:39.828810    2361 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 24 00:16:39.829023 kubelet[2361]: E0424 00:16:39.829007    2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found"
Apr 24 00:16:39.829426 kubelet[2361]: E0424 00:16:39.829384    2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.204.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-204-89?timeout=10s\": dial tcp 172.234.204.89:6443: connect: connection refused" interval="200ms"
Apr 24 00:16:39.829666 kubelet[2361]: I0424 00:16:39.829642    2361 reconciler.go:29] "Reconciler: start to sync state"
Apr 24 00:16:39.829730 kubelet[2361]: I0424 00:16:39.829697    2361 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 24 00:16:39.830026 kubelet[2361]: E0424 00:16:39.829952    2361 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.204.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.204.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 00:16:39.833118 kubelet[2361]: I0424 00:16:39.833058    2361 factory.go:223] Registration of the containerd container factory successfully
Apr 24 00:16:39.833118 kubelet[2361]: I0424 00:16:39.833120    2361 factory.go:223] Registration of the systemd container factory successfully
Apr 24 00:16:39.833202 kubelet[2361]: I0424 00:16:39.833189    2361 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 00:16:39.844680 kubelet[2361]: I0424 00:16:39.844559    2361 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 24 00:16:39.845739 kubelet[2361]: I0424 00:16:39.845724    2361 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 24 00:16:39.845807 kubelet[2361]: I0424 00:16:39.845797    2361 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 24 00:16:39.845874 kubelet[2361]: I0424 00:16:39.845864    2361 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 24 00:16:39.845954 kubelet[2361]: E0424 00:16:39.845940    2361 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 00:16:39.853245 kubelet[2361]: E0424 00:16:39.853223    2361 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.204.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.204.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 00:16:39.866299 kubelet[2361]: I0424 00:16:39.866227    2361 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 00:16:39.866299 kubelet[2361]: I0424 00:16:39.866247    2361 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 00:16:39.866299 kubelet[2361]: I0424 00:16:39.866291    2361 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 00:16:39.867755 kubelet[2361]: I0424 00:16:39.867737    2361 policy_none.go:49] "None policy: Start"
Apr 24 00:16:39.867755 kubelet[2361]: I0424 00:16:39.867755    2361 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 24 00:16:39.867755 kubelet[2361]: I0424 00:16:39.867766    2361 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 24 00:16:39.868832 kubelet[2361]: I0424 00:16:39.868789    2361 policy_none.go:47] "Start"
Apr 24 00:16:39.872964 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 24 00:16:39.884450 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 24 00:16:39.888076 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 24 00:16:39.896963 kubelet[2361]: E0424 00:16:39.896613    2361 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 00:16:39.897426 kubelet[2361]: I0424 00:16:39.897413    2361 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 00:16:39.897616 kubelet[2361]: I0424 00:16:39.897503    2361 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 00:16:39.898160 kubelet[2361]: I0424 00:16:39.898131    2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 00:16:39.899232 kubelet[2361]: E0424 00:16:39.899158    2361 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 00:16:39.899232 kubelet[2361]: E0424 00:16:39.899190    2361 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-204-89\" not found"
Apr 24 00:16:39.958535 systemd[1]: Created slice kubepods-burstable-pod3688cea22db0f68b827cf9f86019cd82.slice - libcontainer container kubepods-burstable-pod3688cea22db0f68b827cf9f86019cd82.slice.
Apr 24 00:16:39.977722 kubelet[2361]: E0424 00:16:39.977694    2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-204-89\" not found" node="172-234-204-89"
Apr 24 00:16:39.981364 systemd[1]: Created slice kubepods-burstable-pod0de7388d0d6dea149daa66b1d44a9ce5.slice - libcontainer container kubepods-burstable-pod0de7388d0d6dea149daa66b1d44a9ce5.slice.
Apr 24 00:16:39.999115 kubelet[2361]: E0424 00:16:39.998964    2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-204-89\" not found" node="172-234-204-89"
Apr 24 00:16:39.999777 kubelet[2361]: I0424 00:16:39.999763    2361 kubelet_node_status.go:75] "Attempting to register node" node="172-234-204-89"
Apr 24 00:16:40.000255 kubelet[2361]: E0424 00:16:40.000237    2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.204.89:6443/api/v1/nodes\": dial tcp 172.234.204.89:6443: connect: connection refused" node="172-234-204-89"
Apr 24 00:16:40.001752 systemd[1]: Created slice kubepods-burstable-podf84a9f55f2001a746b37eda5afacf938.slice - libcontainer container kubepods-burstable-podf84a9f55f2001a746b37eda5afacf938.slice.
Apr 24 00:16:40.003481 kubelet[2361]: E0424 00:16:40.003466    2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-204-89\" not found" node="172-234-204-89"
Apr 24 00:16:40.029771 kubelet[2361]: E0424 00:16:40.029743    2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.204.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-204-89?timeout=10s\": dial tcp 172.234.204.89:6443: connect: connection refused" interval="400ms"
Apr 24 00:16:40.130081 kubelet[2361]: I0424 00:16:40.130047    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3688cea22db0f68b827cf9f86019cd82-ca-certs\") pod \"kube-apiserver-172-234-204-89\" (UID: \"3688cea22db0f68b827cf9f86019cd82\") " pod="kube-system/kube-apiserver-172-234-204-89"
Apr 24 00:16:40.130436 kubelet[2361]: I0424 00:16:40.130082    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-flexvolume-dir\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89"
Apr 24 00:16:40.130436 kubelet[2361]: I0424 00:16:40.130099    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3688cea22db0f68b827cf9f86019cd82-k8s-certs\") pod \"kube-apiserver-172-234-204-89\" (UID: \"3688cea22db0f68b827cf9f86019cd82\") " pod="kube-system/kube-apiserver-172-234-204-89"
Apr 24 00:16:40.130436 kubelet[2361]: I0424 00:16:40.130119    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3688cea22db0f68b827cf9f86019cd82-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-204-89\" (UID: \"3688cea22db0f68b827cf9f86019cd82\") " pod="kube-system/kube-apiserver-172-234-204-89"
Apr 24 00:16:40.130436 kubelet[2361]: I0424 00:16:40.130135    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-ca-certs\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89"
Apr 24 00:16:40.130436 kubelet[2361]: I0424 00:16:40.130148    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-k8s-certs\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89"
Apr 24 00:16:40.130556 kubelet[2361]: I0424 00:16:40.130166    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-kubeconfig\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89"
Apr 24 00:16:40.130556 kubelet[2361]: I0424 00:16:40.130180    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89"
Apr 24 00:16:40.130556 kubelet[2361]: I0424 00:16:40.130196    2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f84a9f55f2001a746b37eda5afacf938-kubeconfig\") pod \"kube-scheduler-172-234-204-89\" (UID: \"f84a9f55f2001a746b37eda5afacf938\") " pod="kube-system/kube-scheduler-172-234-204-89"
Apr 24 00:16:40.201946 kubelet[2361]: I0424 00:16:40.201914    2361 kubelet_node_status.go:75] "Attempting to register node" node="172-234-204-89"
Apr 24 00:16:40.202440 kubelet[2361]: E0424 00:16:40.202415    2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.204.89:6443/api/v1/nodes\": dial tcp 172.234.204.89:6443: connect: connection refused" node="172-234-204-89"
Apr 24 00:16:40.280445 kubelet[2361]: E0424 00:16:40.280408    2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:16:40.281361 containerd[1553]: time="2026-04-24T00:16:40.281312296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-204-89,Uid:3688cea22db0f68b827cf9f86019cd82,Namespace:kube-system,Attempt:0,}"
Apr 24 00:16:40.300622 kubelet[2361]: E0424 00:16:40.300598    2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:16:40.301077 containerd[1553]: time="2026-04-24T00:16:40.301030345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-204-89,Uid:0de7388d0d6dea149daa66b1d44a9ce5,Namespace:kube-system,Attempt:0,}"
Apr 24 00:16:40.305405 kubelet[2361]: E0424 00:16:40.305386    2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:16:40.305706 containerd[1553]: time="2026-04-24T00:16:40.305685510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-204-89,Uid:f84a9f55f2001a746b37eda5afacf938,Namespace:kube-system,Attempt:0,}"
Apr 24 00:16:40.430782 kubelet[2361]: E0424 00:16:40.430646    2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.204.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-204-89?timeout=10s\": dial tcp 172.234.204.89:6443: connect: connection refused" interval="800ms"
Apr 24 00:16:40.604774 kubelet[2361]: I0424 00:16:40.604732    2361 kubelet_node_status.go:75] "Attempting to register node" node="172-234-204-89"
Apr 24 00:16:40.605016 kubelet[2361]: E0424 00:16:40.604992    2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.204.89:6443/api/v1/nodes\": dial tcp 172.234.204.89:6443: connect: connection refused" node="172-234-204-89"
Apr 24 00:16:40.709229 kubelet[2361]: E0424 00:16:40.709120    2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.204.89:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-204-89&limit=500&resourceVersion=0\": dial tcp 172.234.204.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 00:16:40.765736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677837675.mount: Deactivated successfully.
Apr 24 00:16:40.771675 containerd[1553]: time="2026-04-24T00:16:40.771632716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 00:16:40.772787 containerd[1553]: time="2026-04-24T00:16:40.772759037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 00:16:40.774038 containerd[1553]: time="2026-04-24T00:16:40.774007318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144"
Apr 24 00:16:40.774557 containerd[1553]: time="2026-04-24T00:16:40.774513779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Apr 24 00:16:40.777054 containerd[1553]: time="2026-04-24T00:16:40.775792020Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 00:16:40.777054 containerd[1553]: time="2026-04-24T00:16:40.776382351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Apr 24 00:16:40.779950 containerd[1553]: time="2026-04-24T00:16:40.779560904Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:16:40.780863 containerd[1553]: time="2026-04-24T00:16:40.780835805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 479.053299ms" Apr 24 00:16:40.781969 containerd[1553]: time="2026-04-24T00:16:40.781938226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 475.381365ms" Apr 24 00:16:40.782469 containerd[1553]: time="2026-04-24T00:16:40.782440537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:16:40.783369 containerd[1553]: time="2026-04-24T00:16:40.783343458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 500.484211ms" Apr 24 00:16:40.786613 kubelet[2361]: E0424 00:16:40.786581 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.204.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.204.89:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 00:16:40.813270 containerd[1553]: time="2026-04-24T00:16:40.813201987Z" level=info msg="connecting to shim a5ca7c6d7fd2825fe820db6d22f665c512340d59f3fb3d31e08a285f42a40ffa" address="unix:///run/containerd/s/5092c6107e1bbb6a26d8359f74fb6179d13a63b76106fec77047da3cc536b838" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:40.826662 containerd[1553]: time="2026-04-24T00:16:40.826615211Z" level=info msg="connecting to shim d548dbbb934cb67dee2e5ab93ea1d9c78e75dbb8571b1c86f2263899b47f3e4e" address="unix:///run/containerd/s/386bdfd4ab3c922dfbf1d11ac030d50d3f2cc70f02188606b00acb0f9d0604f4" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:40.828434 containerd[1553]: time="2026-04-24T00:16:40.828412943Z" level=info msg="connecting to shim 09b43abef23353e0ff753030b07bd49e5ad697de152c65bc97ae01ad13d7bd84" address="unix:///run/containerd/s/dc33451e4183a4aac86924a97461a8e65e6a326ec0492831feb5a815ae6ad90f" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:40.855454 systemd[1]: Started cri-containerd-a5ca7c6d7fd2825fe820db6d22f665c512340d59f3fb3d31e08a285f42a40ffa.scope - libcontainer container a5ca7c6d7fd2825fe820db6d22f665c512340d59f3fb3d31e08a285f42a40ffa. Apr 24 00:16:40.862499 systemd[1]: Started cri-containerd-d548dbbb934cb67dee2e5ab93ea1d9c78e75dbb8571b1c86f2263899b47f3e4e.scope - libcontainer container d548dbbb934cb67dee2e5ab93ea1d9c78e75dbb8571b1c86f2263899b47f3e4e. Apr 24 00:16:40.869646 systemd[1]: Started cri-containerd-09b43abef23353e0ff753030b07bd49e5ad697de152c65bc97ae01ad13d7bd84.scope - libcontainer container 09b43abef23353e0ff753030b07bd49e5ad697de152c65bc97ae01ad13d7bd84. 
Apr 24 00:16:40.931431 containerd[1553]: time="2026-04-24T00:16:40.931367976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-204-89,Uid:0de7388d0d6dea149daa66b1d44a9ce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"09b43abef23353e0ff753030b07bd49e5ad697de152c65bc97ae01ad13d7bd84\"" Apr 24 00:16:40.932605 kubelet[2361]: E0424 00:16:40.932388 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:40.937647 containerd[1553]: time="2026-04-24T00:16:40.937622512Z" level=info msg="CreateContainer within sandbox \"09b43abef23353e0ff753030b07bd49e5ad697de152c65bc97ae01ad13d7bd84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 00:16:40.938676 containerd[1553]: time="2026-04-24T00:16:40.938627763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-204-89,Uid:3688cea22db0f68b827cf9f86019cd82,Namespace:kube-system,Attempt:0,} returns sandbox id \"d548dbbb934cb67dee2e5ab93ea1d9c78e75dbb8571b1c86f2263899b47f3e4e\"" Apr 24 00:16:40.939749 kubelet[2361]: E0424 00:16:40.939626 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:40.942343 containerd[1553]: time="2026-04-24T00:16:40.942317307Z" level=info msg="CreateContainer within sandbox \"d548dbbb934cb67dee2e5ab93ea1d9c78e75dbb8571b1c86f2263899b47f3e4e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 00:16:40.945697 containerd[1553]: time="2026-04-24T00:16:40.945647390Z" level=info msg="Container b70c76701958152ba013f0b639fa5402159a99e9e82fd93bc834a3c3759516bc: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:40.954623 kubelet[2361]: E0424 00:16:40.954590 2361 reflector.go:205] 
"Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.204.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.204.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 00:16:40.954768 containerd[1553]: time="2026-04-24T00:16:40.954721339Z" level=info msg="Container 52a362fab8edfc06696f7dd6d955fc07b9607e9dcc04af0634987dfe0e204c03: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:40.958183 containerd[1553]: time="2026-04-24T00:16:40.957946592Z" level=info msg="CreateContainer within sandbox \"09b43abef23353e0ff753030b07bd49e5ad697de152c65bc97ae01ad13d7bd84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b70c76701958152ba013f0b639fa5402159a99e9e82fd93bc834a3c3759516bc\"" Apr 24 00:16:40.960262 containerd[1553]: time="2026-04-24T00:16:40.960186564Z" level=info msg="StartContainer for \"b70c76701958152ba013f0b639fa5402159a99e9e82fd93bc834a3c3759516bc\"" Apr 24 00:16:40.962440 containerd[1553]: time="2026-04-24T00:16:40.962412627Z" level=info msg="connecting to shim b70c76701958152ba013f0b639fa5402159a99e9e82fd93bc834a3c3759516bc" address="unix:///run/containerd/s/dc33451e4183a4aac86924a97461a8e65e6a326ec0492831feb5a815ae6ad90f" protocol=ttrpc version=3 Apr 24 00:16:40.964087 containerd[1553]: time="2026-04-24T00:16:40.964048628Z" level=info msg="CreateContainer within sandbox \"d548dbbb934cb67dee2e5ab93ea1d9c78e75dbb8571b1c86f2263899b47f3e4e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"52a362fab8edfc06696f7dd6d955fc07b9607e9dcc04af0634987dfe0e204c03\"" Apr 24 00:16:40.966352 containerd[1553]: time="2026-04-24T00:16:40.965832850Z" level=info msg="StartContainer for \"52a362fab8edfc06696f7dd6d955fc07b9607e9dcc04af0634987dfe0e204c03\"" Apr 24 00:16:40.970180 containerd[1553]: time="2026-04-24T00:16:40.969908964Z" level=info 
msg="connecting to shim 52a362fab8edfc06696f7dd6d955fc07b9607e9dcc04af0634987dfe0e204c03" address="unix:///run/containerd/s/386bdfd4ab3c922dfbf1d11ac030d50d3f2cc70f02188606b00acb0f9d0604f4" protocol=ttrpc version=3 Apr 24 00:16:40.982623 containerd[1553]: time="2026-04-24T00:16:40.982571097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-204-89,Uid:f84a9f55f2001a746b37eda5afacf938,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5ca7c6d7fd2825fe820db6d22f665c512340d59f3fb3d31e08a285f42a40ffa\"" Apr 24 00:16:40.986465 kubelet[2361]: E0424 00:16:40.986445 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:40.987062 systemd[1]: Started cri-containerd-b70c76701958152ba013f0b639fa5402159a99e9e82fd93bc834a3c3759516bc.scope - libcontainer container b70c76701958152ba013f0b639fa5402159a99e9e82fd93bc834a3c3759516bc. Apr 24 00:16:40.992442 containerd[1553]: time="2026-04-24T00:16:40.992238806Z" level=info msg="CreateContainer within sandbox \"a5ca7c6d7fd2825fe820db6d22f665c512340d59f3fb3d31e08a285f42a40ffa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 00:16:40.998579 systemd[1]: Started cri-containerd-52a362fab8edfc06696f7dd6d955fc07b9607e9dcc04af0634987dfe0e204c03.scope - libcontainer container 52a362fab8edfc06696f7dd6d955fc07b9607e9dcc04af0634987dfe0e204c03. 
Apr 24 00:16:41.004363 containerd[1553]: time="2026-04-24T00:16:41.004340199Z" level=info msg="Container 73e5462ec2bb56130a854231b601823f1e17e1b01c3beb72c8c91e48c7e3bc3e: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:41.009241 containerd[1553]: time="2026-04-24T00:16:41.009055093Z" level=info msg="CreateContainer within sandbox \"a5ca7c6d7fd2825fe820db6d22f665c512340d59f3fb3d31e08a285f42a40ffa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"73e5462ec2bb56130a854231b601823f1e17e1b01c3beb72c8c91e48c7e3bc3e\"" Apr 24 00:16:41.010427 containerd[1553]: time="2026-04-24T00:16:41.010246254Z" level=info msg="StartContainer for \"73e5462ec2bb56130a854231b601823f1e17e1b01c3beb72c8c91e48c7e3bc3e\"" Apr 24 00:16:41.012187 containerd[1553]: time="2026-04-24T00:16:41.012158896Z" level=info msg="connecting to shim 73e5462ec2bb56130a854231b601823f1e17e1b01c3beb72c8c91e48c7e3bc3e" address="unix:///run/containerd/s/5092c6107e1bbb6a26d8359f74fb6179d13a63b76106fec77047da3cc536b838" protocol=ttrpc version=3 Apr 24 00:16:41.047483 systemd[1]: Started cri-containerd-73e5462ec2bb56130a854231b601823f1e17e1b01c3beb72c8c91e48c7e3bc3e.scope - libcontainer container 73e5462ec2bb56130a854231b601823f1e17e1b01c3beb72c8c91e48c7e3bc3e. 
Apr 24 00:16:41.068136 containerd[1553]: time="2026-04-24T00:16:41.068059232Z" level=info msg="StartContainer for \"b70c76701958152ba013f0b639fa5402159a99e9e82fd93bc834a3c3759516bc\" returns successfully" Apr 24 00:16:41.091815 containerd[1553]: time="2026-04-24T00:16:41.091773976Z" level=info msg="StartContainer for \"52a362fab8edfc06696f7dd6d955fc07b9607e9dcc04af0634987dfe0e204c03\" returns successfully" Apr 24 00:16:41.170186 containerd[1553]: time="2026-04-24T00:16:41.169825874Z" level=info msg="StartContainer for \"73e5462ec2bb56130a854231b601823f1e17e1b01c3beb72c8c91e48c7e3bc3e\" returns successfully" Apr 24 00:16:41.407479 kubelet[2361]: I0424 00:16:41.407452 2361 kubelet_node_status.go:75] "Attempting to register node" node="172-234-204-89" Apr 24 00:16:41.880019 kubelet[2361]: E0424 00:16:41.879981 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-204-89\" not found" node="172-234-204-89" Apr 24 00:16:41.880451 kubelet[2361]: E0424 00:16:41.880153 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:41.887767 kubelet[2361]: E0424 00:16:41.887740 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-204-89\" not found" node="172-234-204-89" Apr 24 00:16:41.887865 kubelet[2361]: E0424 00:16:41.887847 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:41.889615 kubelet[2361]: E0424 00:16:41.889600 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-204-89\" not found" node="172-234-204-89" Apr 24 00:16:41.889704 kubelet[2361]: E0424 00:16:41.889686 
2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:42.339617 kubelet[2361]: E0424 00:16:42.339086 2361 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-204-89\" not found" node="172-234-204-89" Apr 24 00:16:42.393427 kubelet[2361]: I0424 00:16:42.393131 2361 kubelet_node_status.go:78] "Successfully registered node" node="172-234-204-89" Apr 24 00:16:42.393427 kubelet[2361]: E0424 00:16:42.393181 2361 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-234-204-89\": node \"172-234-204-89\" not found" Apr 24 00:16:42.411521 kubelet[2361]: E0424 00:16:42.411487 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:42.511715 kubelet[2361]: E0424 00:16:42.511671 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:42.612321 kubelet[2361]: E0424 00:16:42.612255 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:42.713054 kubelet[2361]: E0424 00:16:42.713017 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:42.813968 kubelet[2361]: E0424 00:16:42.813914 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:42.892219 kubelet[2361]: E0424 00:16:42.891864 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-204-89\" not found" node="172-234-204-89" Apr 24 00:16:42.892219 kubelet[2361]: E0424 00:16:42.891948 2361 kubelet.go:3216] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"172-234-204-89\" not found" node="172-234-204-89" Apr 24 00:16:42.892219 kubelet[2361]: E0424 00:16:42.892022 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:42.892219 kubelet[2361]: E0424 00:16:42.892032 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:42.914946 kubelet[2361]: E0424 00:16:42.914913 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.015548 kubelet[2361]: E0424 00:16:43.015497 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.116438 kubelet[2361]: E0424 00:16:43.115989 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.216809 kubelet[2361]: E0424 00:16:43.216687 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.317710 kubelet[2361]: E0424 00:16:43.317575 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.418195 kubelet[2361]: E0424 00:16:43.418151 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.518746 kubelet[2361]: E0424 00:16:43.518640 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.619099 kubelet[2361]: E0424 00:16:43.619066 2361 kubelet_node_status.go:404] "Error getting the current node from lister" 
err="node \"172-234-204-89\" not found" Apr 24 00:16:43.719546 kubelet[2361]: E0424 00:16:43.719492 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.820030 kubelet[2361]: E0424 00:16:43.819915 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-204-89\" not found" Apr 24 00:16:43.929530 kubelet[2361]: I0424 00:16:43.929496 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-204-89" Apr 24 00:16:43.937508 kubelet[2361]: I0424 00:16:43.937381 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:43.940800 kubelet[2361]: I0424 00:16:43.940781 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-204-89" Apr 24 00:16:44.360926 kubelet[2361]: I0424 00:16:44.360617 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:44.366209 kubelet[2361]: E0424 00:16:44.366158 2361 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-204-89\" already exists" pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:44.366427 kubelet[2361]: E0424 00:16:44.366401 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:44.480028 systemd[1]: Reload requested from client PID 2647 ('systemctl') (unit session-7.scope)... Apr 24 00:16:44.480047 systemd[1]: Reloading... Apr 24 00:16:44.605320 zram_generator::config[2694]: No configuration found. 
Apr 24 00:16:44.815378 kubelet[2361]: I0424 00:16:44.814550 2361 apiserver.go:52] "Watching apiserver" Apr 24 00:16:44.818187 kubelet[2361]: E0424 00:16:44.818138 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:44.818243 kubelet[2361]: E0424 00:16:44.818228 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:44.831312 kubelet[2361]: I0424 00:16:44.830690 2361 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 00:16:44.837224 systemd[1]: Reloading finished in 356 ms. Apr 24 00:16:44.867864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:16:44.886989 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 00:16:44.887323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:16:44.887390 systemd[1]: kubelet.service: Consumed 1.138s CPU time, 124.2M memory peak. Apr 24 00:16:44.890096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:16:45.085630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:16:45.095720 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 00:16:45.145108 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 00:16:45.145108 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 00:16:45.145448 kubelet[2742]: I0424 00:16:45.145135 2742 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 00:16:45.151334 kubelet[2742]: I0424 00:16:45.150837 2742 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 00:16:45.151334 kubelet[2742]: I0424 00:16:45.150874 2742 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 00:16:45.151334 kubelet[2742]: I0424 00:16:45.150901 2742 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 00:16:45.151334 kubelet[2742]: I0424 00:16:45.150912 2742 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 00:16:45.151334 kubelet[2742]: I0424 00:16:45.151106 2742 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 00:16:45.152655 kubelet[2742]: I0424 00:16:45.152630 2742 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 00:16:45.156309 kubelet[2742]: I0424 00:16:45.155523 2742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:16:45.160742 kubelet[2742]: I0424 00:16:45.160719 2742 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 00:16:45.164679 kubelet[2742]: I0424 00:16:45.164651 2742 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 00:16:45.164881 kubelet[2742]: I0424 00:16:45.164851 2742 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 00:16:45.165014 kubelet[2742]: I0424 00:16:45.164879 2742 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-204-89","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 00:16:45.165087 kubelet[2742]: I0424 00:16:45.165016 2742 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
00:16:45.165087 kubelet[2742]: I0424 00:16:45.165025 2742 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 00:16:45.165087 kubelet[2742]: I0424 00:16:45.165050 2742 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 00:16:45.165232 kubelet[2742]: I0424 00:16:45.165218 2742 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:16:45.166374 kubelet[2742]: I0424 00:16:45.165408 2742 kubelet.go:475] "Attempting to sync node with API server" Apr 24 00:16:45.166374 kubelet[2742]: I0424 00:16:45.165424 2742 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 00:16:45.166374 kubelet[2742]: I0424 00:16:45.165458 2742 kubelet.go:387] "Adding apiserver pod source" Apr 24 00:16:45.166374 kubelet[2742]: I0424 00:16:45.165468 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 00:16:45.172017 kubelet[2742]: I0424 00:16:45.171992 2742 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 00:16:45.172680 kubelet[2742]: I0424 00:16:45.172647 2742 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 00:16:45.172777 kubelet[2742]: I0424 00:16:45.172766 2742 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 00:16:45.177182 kubelet[2742]: I0424 00:16:45.177082 2742 server.go:1262] "Started kubelet" Apr 24 00:16:45.179105 kubelet[2742]: I0424 00:16:45.179092 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 00:16:45.179734 kubelet[2742]: I0424 00:16:45.179690 2742 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 00:16:45.181692 kubelet[2742]: I0424 00:16:45.181669 2742 server.go:310] "Adding debug handlers to 
kubelet server" Apr 24 00:16:45.186199 kubelet[2742]: I0424 00:16:45.186176 2742 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 00:16:45.187421 kubelet[2742]: I0424 00:16:45.187380 2742 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 00:16:45.187471 kubelet[2742]: I0424 00:16:45.187426 2742 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 00:16:45.187614 kubelet[2742]: I0424 00:16:45.187580 2742 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 00:16:45.188611 kubelet[2742]: I0424 00:16:45.188137 2742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 00:16:45.188693 kubelet[2742]: I0424 00:16:45.188673 2742 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 00:16:45.189059 kubelet[2742]: I0424 00:16:45.189047 2742 reconciler.go:29] "Reconciler: start to sync state" Apr 24 00:16:45.194613 kubelet[2742]: I0424 00:16:45.193997 2742 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 00:16:45.199548 kubelet[2742]: I0424 00:16:45.199524 2742 factory.go:223] Registration of the containerd container factory successfully Apr 24 00:16:45.199548 kubelet[2742]: I0424 00:16:45.199543 2742 factory.go:223] Registration of the systemd container factory successfully Apr 24 00:16:45.204326 kubelet[2742]: I0424 00:16:45.204207 2742 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 00:16:45.205558 kubelet[2742]: I0424 00:16:45.205529 2742 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 24 00:16:45.205558 kubelet[2742]: I0424 00:16:45.205549 2742 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 24 00:16:45.205618 kubelet[2742]: I0424 00:16:45.205570 2742 kubelet.go:2428] "Starting kubelet main sync loop" Apr 24 00:16:45.205640 kubelet[2742]: E0424 00:16:45.205610 2742 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 00:16:45.248834 kubelet[2742]: I0424 00:16:45.248486 2742 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 00:16:45.248834 kubelet[2742]: I0424 00:16:45.248502 2742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 00:16:45.248834 kubelet[2742]: I0424 00:16:45.248521 2742 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:16:45.248834 kubelet[2742]: I0424 00:16:45.248635 2742 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 00:16:45.248834 kubelet[2742]: I0424 00:16:45.248644 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 00:16:45.248834 kubelet[2742]: I0424 00:16:45.248659 2742 policy_none.go:49] "None policy: Start" Apr 24 00:16:45.248834 kubelet[2742]: I0424 00:16:45.248668 2742 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 00:16:45.248834 kubelet[2742]: I0424 00:16:45.248678 2742 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 00:16:45.249330 kubelet[2742]: I0424 00:16:45.249318 2742 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 24 00:16:45.249412 kubelet[2742]: I0424 00:16:45.249403 2742 policy_none.go:47] "Start" Apr 24 00:16:45.254907 kubelet[2742]: E0424 00:16:45.254424 2742 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 00:16:45.254907 kubelet[2742]: I0424 00:16:45.254581 
2742 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 00:16:45.254907 kubelet[2742]: I0424 00:16:45.254591 2742 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 00:16:45.254907 kubelet[2742]: I0424 00:16:45.254798 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 00:16:45.257967 kubelet[2742]: E0424 00:16:45.257949 2742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 00:16:45.306817 kubelet[2742]: I0424 00:16:45.306790 2742 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-204-89" Apr 24 00:16:45.307321 kubelet[2742]: I0424 00:16:45.307060 2742 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:45.307421 kubelet[2742]: I0424 00:16:45.307158 2742 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-204-89" Apr 24 00:16:45.312489 kubelet[2742]: E0424 00:16:45.312424 2742 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-204-89\" already exists" pod="kube-system/kube-apiserver-172-234-204-89" Apr 24 00:16:45.313066 kubelet[2742]: E0424 00:16:45.312988 2742 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-204-89\" already exists" pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:45.313216 kubelet[2742]: E0424 00:16:45.313155 2742 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-204-89\" already exists" pod="kube-system/kube-scheduler-172-234-204-89" Apr 24 00:16:45.367796 kubelet[2742]: I0424 00:16:45.367776 2742 kubelet_node_status.go:75] "Attempting to register node" node="172-234-204-89" Apr 24 00:16:45.373701 kubelet[2742]: I0424 00:16:45.373656 2742 
kubelet_node_status.go:124] "Node was previously registered" node="172-234-204-89" Apr 24 00:16:45.373784 kubelet[2742]: I0424 00:16:45.373776 2742 kubelet_node_status.go:78] "Successfully registered node" node="172-234-204-89" Apr 24 00:16:45.390235 kubelet[2742]: I0424 00:16:45.390180 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-ca-certs\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:45.390649 kubelet[2742]: I0424 00:16:45.390265 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-flexvolume-dir\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:45.390649 kubelet[2742]: I0424 00:16:45.390364 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-kubeconfig\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:45.390649 kubelet[2742]: I0424 00:16:45.390399 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:45.390649 kubelet[2742]: I0424 00:16:45.390430 2742 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3688cea22db0f68b827cf9f86019cd82-ca-certs\") pod \"kube-apiserver-172-234-204-89\" (UID: \"3688cea22db0f68b827cf9f86019cd82\") " pod="kube-system/kube-apiserver-172-234-204-89" Apr 24 00:16:45.390649 kubelet[2742]: I0424 00:16:45.390449 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3688cea22db0f68b827cf9f86019cd82-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-204-89\" (UID: \"3688cea22db0f68b827cf9f86019cd82\") " pod="kube-system/kube-apiserver-172-234-204-89" Apr 24 00:16:45.390779 kubelet[2742]: I0424 00:16:45.390465 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0de7388d0d6dea149daa66b1d44a9ce5-k8s-certs\") pod \"kube-controller-manager-172-234-204-89\" (UID: \"0de7388d0d6dea149daa66b1d44a9ce5\") " pod="kube-system/kube-controller-manager-172-234-204-89" Apr 24 00:16:45.390779 kubelet[2742]: I0424 00:16:45.390481 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f84a9f55f2001a746b37eda5afacf938-kubeconfig\") pod \"kube-scheduler-172-234-204-89\" (UID: \"f84a9f55f2001a746b37eda5afacf938\") " pod="kube-system/kube-scheduler-172-234-204-89" Apr 24 00:16:45.390779 kubelet[2742]: I0424 00:16:45.390495 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3688cea22db0f68b827cf9f86019cd82-k8s-certs\") pod \"kube-apiserver-172-234-204-89\" (UID: \"3688cea22db0f68b827cf9f86019cd82\") " pod="kube-system/kube-apiserver-172-234-204-89" Apr 24 00:16:45.486334 sudo[2780]: root : PWD=/home/core ; USER=root ; 
COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 24 00:16:45.487028 sudo[2780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 24 00:16:45.613452 kubelet[2742]: E0424 00:16:45.613414 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:45.613695 kubelet[2742]: E0424 00:16:45.613677 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:45.613819 kubelet[2742]: E0424 00:16:45.613796 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:45.804748 sudo[2780]: pam_unix(sudo:session): session closed for user root Apr 24 00:16:46.167036 kubelet[2742]: I0424 00:16:46.166811 2742 apiserver.go:52] "Watching apiserver" Apr 24 00:16:46.189526 kubelet[2742]: I0424 00:16:46.189488 2742 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 00:16:46.237114 kubelet[2742]: E0424 00:16:46.236775 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:46.237435 kubelet[2742]: I0424 00:16:46.237416 2742 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-204-89" Apr 24 00:16:46.237826 kubelet[2742]: E0424 00:16:46.237810 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:46.247339 kubelet[2742]: E0424 
00:16:46.247313 2742 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-204-89\" already exists" pod="kube-system/kube-scheduler-172-234-204-89" Apr 24 00:16:46.247461 kubelet[2742]: E0424 00:16:46.247431 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:46.282401 kubelet[2742]: I0424 00:16:46.282255 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-204-89" podStartSLOduration=3.282241585 podStartE2EDuration="3.282241585s" podCreationTimestamp="2026-04-24 00:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:16:46.281858755 +0000 UTC m=+1.180962212" watchObservedRunningTime="2026-04-24 00:16:46.282241585 +0000 UTC m=+1.181345042" Apr 24 00:16:46.289300 kubelet[2742]: I0424 00:16:46.289016 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-204-89" podStartSLOduration=3.289001012 podStartE2EDuration="3.289001012s" podCreationTimestamp="2026-04-24 00:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:16:46.288994582 +0000 UTC m=+1.188098049" watchObservedRunningTime="2026-04-24 00:16:46.289001012 +0000 UTC m=+1.188104479" Apr 24 00:16:46.301708 kubelet[2742]: I0424 00:16:46.301571 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-204-89" podStartSLOduration=3.301561755 podStartE2EDuration="3.301561755s" podCreationTimestamp="2026-04-24 00:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-04-24 00:16:46.295214818 +0000 UTC m=+1.194318275" watchObservedRunningTime="2026-04-24 00:16:46.301561755 +0000 UTC m=+1.200665212" Apr 24 00:16:47.026680 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 24 00:16:47.154678 sudo[1800]: pam_unix(sudo:session): session closed for user root Apr 24 00:16:47.233842 kubelet[2742]: E0424 00:16:47.233808 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:47.234311 kubelet[2742]: E0424 00:16:47.234267 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:47.250636 sshd[1799]: Connection closed by 20.229.252.112 port 57594 Apr 24 00:16:47.251607 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Apr 24 00:16:47.256234 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Apr 24 00:16:47.257256 systemd[1]: sshd@6-172.234.204.89:22-20.229.252.112:57594.service: Deactivated successfully. Apr 24 00:16:47.259448 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 00:16:47.259691 systemd[1]: session-7.scope: Consumed 4.570s CPU time, 273.7M memory peak. Apr 24 00:16:47.261804 systemd-logind[1532]: Removed session 7. 
Apr 24 00:16:49.935712 kubelet[2742]: E0424 00:16:49.935083 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:50.024529 kubelet[2742]: I0424 00:16:50.024501 2742 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 00:16:50.024968 containerd[1553]: time="2026-04-24T00:16:50.024939486Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 00:16:50.025562 kubelet[2742]: I0424 00:16:50.025220 2742 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 00:16:51.114398 systemd[1]: Created slice kubepods-besteffort-pod53b614ef_1037_4098_9450_3314ed59bde1.slice - libcontainer container kubepods-besteffort-pod53b614ef_1037_4098_9450_3314ed59bde1.slice. Apr 24 00:16:51.126551 kubelet[2742]: I0424 00:16:51.126516 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53b614ef-1037-4098-9450-3314ed59bde1-lib-modules\") pod \"kube-proxy-b28ps\" (UID: \"53b614ef-1037-4098-9450-3314ed59bde1\") " pod="kube-system/kube-proxy-b28ps" Apr 24 00:16:51.126944 kubelet[2742]: I0424 00:16:51.126575 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53b614ef-1037-4098-9450-3314ed59bde1-kube-proxy\") pod \"kube-proxy-b28ps\" (UID: \"53b614ef-1037-4098-9450-3314ed59bde1\") " pod="kube-system/kube-proxy-b28ps" Apr 24 00:16:51.126944 kubelet[2742]: I0424 00:16:51.126592 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p748\" (UniqueName: 
\"kubernetes.io/projected/53b614ef-1037-4098-9450-3314ed59bde1-kube-api-access-8p748\") pod \"kube-proxy-b28ps\" (UID: \"53b614ef-1037-4098-9450-3314ed59bde1\") " pod="kube-system/kube-proxy-b28ps" Apr 24 00:16:51.126944 kubelet[2742]: I0424 00:16:51.126608 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53b614ef-1037-4098-9450-3314ed59bde1-xtables-lock\") pod \"kube-proxy-b28ps\" (UID: \"53b614ef-1037-4098-9450-3314ed59bde1\") " pod="kube-system/kube-proxy-b28ps" Apr 24 00:16:51.139268 systemd[1]: Created slice kubepods-burstable-pode11a6d9c_31e0_4d72_92a5_d5304f3f5bdc.slice - libcontainer container kubepods-burstable-pode11a6d9c_31e0_4d72_92a5_d5304f3f5bdc.slice. Apr 24 00:16:51.228684 kubelet[2742]: I0424 00:16:51.227622 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-etc-cni-netd\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228684 kubelet[2742]: I0424 00:16:51.227654 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-xtables-lock\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228684 kubelet[2742]: I0424 00:16:51.227670 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-host-proc-sys-net\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228684 kubelet[2742]: I0424 00:16:51.227707 2742 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-clustermesh-secrets\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228684 kubelet[2742]: I0424 00:16:51.227954 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-run\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228684 kubelet[2742]: I0424 00:16:51.227984 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-bpf-maps\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228902 kubelet[2742]: I0424 00:16:51.228000 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-hostproc\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228902 kubelet[2742]: I0424 00:16:51.228027 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-config-path\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228902 kubelet[2742]: I0424 00:16:51.228047 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-cgroup\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228902 kubelet[2742]: I0424 00:16:51.228064 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cni-path\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228902 kubelet[2742]: I0424 00:16:51.228090 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-lib-modules\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.228902 kubelet[2742]: I0424 00:16:51.228105 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-host-proc-sys-kernel\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.229033 kubelet[2742]: I0424 00:16:51.228120 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-hubble-tls\") pod \"cilium-gw5cm\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.229033 kubelet[2742]: I0424 00:16:51.228134 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v592q\" (UniqueName: \"kubernetes.io/projected/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-kube-api-access-v592q\") pod \"cilium-gw5cm\" (UID: 
\"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") " pod="kube-system/cilium-gw5cm" Apr 24 00:16:51.269320 systemd[1]: Created slice kubepods-besteffort-pod5665695f_fb75_4206_9467_1b6af4c145a0.slice - libcontainer container kubepods-besteffort-pod5665695f_fb75_4206_9467_1b6af4c145a0.slice. Apr 24 00:16:51.328910 kubelet[2742]: I0424 00:16:51.328541 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm8q7\" (UniqueName: \"kubernetes.io/projected/5665695f-fb75-4206-9467-1b6af4c145a0-kube-api-access-sm8q7\") pod \"cilium-operator-6f9c7c5859-g5j4w\" (UID: \"5665695f-fb75-4206-9467-1b6af4c145a0\") " pod="kube-system/cilium-operator-6f9c7c5859-g5j4w" Apr 24 00:16:51.328910 kubelet[2742]: I0424 00:16:51.328611 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5665695f-fb75-4206-9467-1b6af4c145a0-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-g5j4w\" (UID: \"5665695f-fb75-4206-9467-1b6af4c145a0\") " pod="kube-system/cilium-operator-6f9c7c5859-g5j4w" Apr 24 00:16:51.423773 kubelet[2742]: E0424 00:16:51.423666 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:51.425033 containerd[1553]: time="2026-04-24T00:16:51.424941072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b28ps,Uid:53b614ef-1037-4098-9450-3314ed59bde1,Namespace:kube-system,Attempt:0,}" Apr 24 00:16:51.449390 kubelet[2742]: E0424 00:16:51.447291 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:51.452661 containerd[1553]: time="2026-04-24T00:16:51.452629819Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-gw5cm,Uid:e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc,Namespace:kube-system,Attempt:0,}" Apr 24 00:16:51.462200 containerd[1553]: time="2026-04-24T00:16:51.462160147Z" level=info msg="connecting to shim 809c464f7ec82c2fb5741ca5f30cd1463f601f24ddfaf65be4bfd9aca39c4685" address="unix:///run/containerd/s/4723398dd98a76e149d952d0dc021383df00d090a54cbae17c73fe0b8e52f5b5" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:51.479129 containerd[1553]: time="2026-04-24T00:16:51.478967099Z" level=info msg="connecting to shim 5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84" address="unix:///run/containerd/s/2bcbe601ad226b127b56304f625617de906a730f383ce0f6a3f5d2ef76d37100" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:51.493431 systemd[1]: Started cri-containerd-809c464f7ec82c2fb5741ca5f30cd1463f601f24ddfaf65be4bfd9aca39c4685.scope - libcontainer container 809c464f7ec82c2fb5741ca5f30cd1463f601f24ddfaf65be4bfd9aca39c4685. Apr 24 00:16:51.515397 systemd[1]: Started cri-containerd-5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84.scope - libcontainer container 5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84. 
Apr 24 00:16:51.548714 containerd[1553]: time="2026-04-24T00:16:51.548643190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b28ps,Uid:53b614ef-1037-4098-9450-3314ed59bde1,Namespace:kube-system,Attempt:0,} returns sandbox id \"809c464f7ec82c2fb5741ca5f30cd1463f601f24ddfaf65be4bfd9aca39c4685\"" Apr 24 00:16:51.549968 kubelet[2742]: E0424 00:16:51.549876 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:51.553646 containerd[1553]: time="2026-04-24T00:16:51.553561090Z" level=info msg="CreateContainer within sandbox \"809c464f7ec82c2fb5741ca5f30cd1463f601f24ddfaf65be4bfd9aca39c4685\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 00:16:51.556850 containerd[1553]: time="2026-04-24T00:16:51.556527138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gw5cm,Uid:e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\"" Apr 24 00:16:51.558611 kubelet[2742]: E0424 00:16:51.558521 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:51.560357 containerd[1553]: time="2026-04-24T00:16:51.560243591Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 24 00:16:51.564858 containerd[1553]: time="2026-04-24T00:16:51.563854023Z" level=info msg="Container 0d81811b2eccaa4233e102c885cd151b2fe2e88e816513f37f0700486c969558: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:51.569198 containerd[1553]: time="2026-04-24T00:16:51.569168585Z" level=info msg="CreateContainer within sandbox 
\"809c464f7ec82c2fb5741ca5f30cd1463f601f24ddfaf65be4bfd9aca39c4685\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d81811b2eccaa4233e102c885cd151b2fe2e88e816513f37f0700486c969558\"" Apr 24 00:16:51.569973 containerd[1553]: time="2026-04-24T00:16:51.569818299Z" level=info msg="StartContainer for \"0d81811b2eccaa4233e102c885cd151b2fe2e88e816513f37f0700486c969558\"" Apr 24 00:16:51.573860 containerd[1553]: time="2026-04-24T00:16:51.573804783Z" level=info msg="connecting to shim 0d81811b2eccaa4233e102c885cd151b2fe2e88e816513f37f0700486c969558" address="unix:///run/containerd/s/4723398dd98a76e149d952d0dc021383df00d090a54cbae17c73fe0b8e52f5b5" protocol=ttrpc version=3 Apr 24 00:16:51.574108 kubelet[2742]: E0424 00:16:51.574054 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:51.575022 containerd[1553]: time="2026-04-24T00:16:51.574989950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-g5j4w,Uid:5665695f-fb75-4206-9467-1b6af4c145a0,Namespace:kube-system,Attempt:0,}" Apr 24 00:16:51.596642 containerd[1553]: time="2026-04-24T00:16:51.596578140Z" level=info msg="connecting to shim 1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c" address="unix:///run/containerd/s/9d485f247648f06ac76524b97e2b7bb75fcd922b89bf69b5541cc89d349834cc" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:51.612673 systemd[1]: Started cri-containerd-0d81811b2eccaa4233e102c885cd151b2fe2e88e816513f37f0700486c969558.scope - libcontainer container 0d81811b2eccaa4233e102c885cd151b2fe2e88e816513f37f0700486c969558. Apr 24 00:16:51.627394 systemd[1]: Started cri-containerd-1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c.scope - libcontainer container 1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c. 
Apr 24 00:16:51.701903 containerd[1553]: time="2026-04-24T00:16:51.701408055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-g5j4w,Uid:5665695f-fb75-4206-9467-1b6af4c145a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\"" Apr 24 00:16:51.703220 kubelet[2742]: E0424 00:16:51.703177 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:51.706489 containerd[1553]: time="2026-04-24T00:16:51.706442896Z" level=info msg="StartContainer for \"0d81811b2eccaa4233e102c885cd151b2fe2e88e816513f37f0700486c969558\" returns successfully" Apr 24 00:16:52.252422 kubelet[2742]: E0424 00:16:52.251302 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:52.986956 kubelet[2742]: E0424 00:16:52.986918 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:53.004696 kubelet[2742]: I0424 00:16:53.004517 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b28ps" podStartSLOduration=2.004504908 podStartE2EDuration="2.004504908s" podCreationTimestamp="2026-04-24 00:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:16:52.266486903 +0000 UTC m=+7.165590360" watchObservedRunningTime="2026-04-24 00:16:53.004504908 +0000 UTC m=+7.903608365" Apr 24 00:16:53.252737 kubelet[2742]: E0424 00:16:53.252556 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:54.037195 kubelet[2742]: E0424 00:16:54.037114 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:54.254590 kubelet[2742]: E0424 00:16:54.253931 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:54.254590 kubelet[2742]: E0424 00:16:54.254445 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:54.771630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288282711.mount: Deactivated successfully. Apr 24 00:16:56.330191 containerd[1553]: time="2026-04-24T00:16:56.330092015Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:56.331045 containerd[1553]: time="2026-04-24T00:16:56.330780637Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 24 00:16:56.332302 containerd[1553]: time="2026-04-24T00:16:56.331394001Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:56.333306 containerd[1553]: time="2026-04-24T00:16:56.333260309Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.772988228s" Apr 24 00:16:56.333413 containerd[1553]: time="2026-04-24T00:16:56.333390230Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 24 00:16:56.335042 containerd[1553]: time="2026-04-24T00:16:56.335019428Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 24 00:16:56.338317 containerd[1553]: time="2026-04-24T00:16:56.338239763Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 24 00:16:56.345324 containerd[1553]: time="2026-04-24T00:16:56.345251485Z" level=info msg="Container 7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:56.351782 containerd[1553]: time="2026-04-24T00:16:56.351739855Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\"" Apr 24 00:16:56.352517 containerd[1553]: time="2026-04-24T00:16:56.352409568Z" level=info msg="StartContainer for \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\"" Apr 24 00:16:56.353692 containerd[1553]: time="2026-04-24T00:16:56.353623634Z" level=info msg="connecting to shim 7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86" 
address="unix:///run/containerd/s/2bcbe601ad226b127b56304f625617de906a730f383ce0f6a3f5d2ef76d37100" protocol=ttrpc version=3 Apr 24 00:16:56.379369 systemd[1]: Started cri-containerd-7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86.scope - libcontainer container 7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86. Apr 24 00:16:56.409556 containerd[1553]: time="2026-04-24T00:16:56.409519895Z" level=info msg="StartContainer for \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\" returns successfully" Apr 24 00:16:56.421120 systemd[1]: cri-containerd-7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86.scope: Deactivated successfully. Apr 24 00:16:56.423268 containerd[1553]: time="2026-04-24T00:16:56.423240149Z" level=info msg="received container exit event container_id:\"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\" id:\"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\" pid:3163 exited_at:{seconds:1776989816 nanos:422660646}" Apr 24 00:16:56.449607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86-rootfs.mount: Deactivated successfully. 
Apr 24 00:16:57.268558 kubelet[2742]: E0424 00:16:57.268530 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:57.277745 containerd[1553]: time="2026-04-24T00:16:57.277700188Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 24 00:16:57.287893 containerd[1553]: time="2026-04-24T00:16:57.287407831Z" level=info msg="Container b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:57.297045 containerd[1553]: time="2026-04-24T00:16:57.297013534Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\"" Apr 24 00:16:57.298052 containerd[1553]: time="2026-04-24T00:16:57.298025938Z" level=info msg="StartContainer for \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\"" Apr 24 00:16:57.299633 containerd[1553]: time="2026-04-24T00:16:57.299612165Z" level=info msg="connecting to shim b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b" address="unix:///run/containerd/s/2bcbe601ad226b127b56304f625617de906a730f383ce0f6a3f5d2ef76d37100" protocol=ttrpc version=3 Apr 24 00:16:57.321462 systemd[1]: Started cri-containerd-b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b.scope - libcontainer container b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b. 
Apr 24 00:16:57.364743 containerd[1553]: time="2026-04-24T00:16:57.364706733Z" level=info msg="StartContainer for \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\" returns successfully" Apr 24 00:16:57.381543 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 24 00:16:57.381749 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 24 00:16:57.384601 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 24 00:16:57.387162 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 00:16:57.390201 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 24 00:16:57.391350 systemd[1]: cri-containerd-b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b.scope: Deactivated successfully. Apr 24 00:16:57.397456 containerd[1553]: time="2026-04-24T00:16:57.397421608Z" level=info msg="received container exit event container_id:\"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\" id:\"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\" pid:3222 exited_at:{seconds:1776989817 nanos:392507687}" Apr 24 00:16:57.413748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 00:16:57.435137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b-rootfs.mount: Deactivated successfully. 
Apr 24 00:16:57.593553 containerd[1553]: time="2026-04-24T00:16:57.593369607Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:57.594770 containerd[1553]: time="2026-04-24T00:16:57.594655342Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 24 00:16:57.596762 containerd[1553]: time="2026-04-24T00:16:57.595429426Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:57.597433 containerd[1553]: time="2026-04-24T00:16:57.597399215Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.262235237s" Apr 24 00:16:57.597498 containerd[1553]: time="2026-04-24T00:16:57.597436045Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 24 00:16:57.602438 containerd[1553]: time="2026-04-24T00:16:57.602267767Z" level=info msg="CreateContainer within sandbox \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 24 00:16:57.615577 containerd[1553]: time="2026-04-24T00:16:57.615143034Z" level=info msg="Container 
944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:57.621659 containerd[1553]: time="2026-04-24T00:16:57.621620062Z" level=info msg="CreateContainer within sandbox \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\"" Apr 24 00:16:57.623154 containerd[1553]: time="2026-04-24T00:16:57.623025258Z" level=info msg="StartContainer for \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\"" Apr 24 00:16:57.624213 containerd[1553]: time="2026-04-24T00:16:57.624182934Z" level=info msg="connecting to shim 944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775" address="unix:///run/containerd/s/9d485f247648f06ac76524b97e2b7bb75fcd922b89bf69b5541cc89d349834cc" protocol=ttrpc version=3 Apr 24 00:16:57.645461 systemd[1]: Started cri-containerd-944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775.scope - libcontainer container 944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775. 
Apr 24 00:16:57.689947 containerd[1553]: time="2026-04-24T00:16:57.689865564Z" level=info msg="StartContainer for \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" returns successfully" Apr 24 00:16:58.274403 kubelet[2742]: E0424 00:16:58.274370 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:58.278608 kubelet[2742]: E0424 00:16:58.278574 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:58.282352 containerd[1553]: time="2026-04-24T00:16:58.282302340Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 24 00:16:58.299471 containerd[1553]: time="2026-04-24T00:16:58.299420362Z" level=info msg="Container a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:58.351140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80766654.mount: Deactivated successfully. 
Apr 24 00:16:58.352994 containerd[1553]: time="2026-04-24T00:16:58.352908887Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\"" Apr 24 00:16:58.358033 containerd[1553]: time="2026-04-24T00:16:58.357994189Z" level=info msg="StartContainer for \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\"" Apr 24 00:16:58.360733 containerd[1553]: time="2026-04-24T00:16:58.360690120Z" level=info msg="connecting to shim a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a" address="unix:///run/containerd/s/2bcbe601ad226b127b56304f625617de906a730f383ce0f6a3f5d2ef76d37100" protocol=ttrpc version=3 Apr 24 00:16:58.411916 systemd[1]: Started cri-containerd-a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a.scope - libcontainer container a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a. Apr 24 00:16:58.486231 kubelet[2742]: I0424 00:16:58.486162 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-g5j4w" podStartSLOduration=1.592195565 podStartE2EDuration="7.486112529s" podCreationTimestamp="2026-04-24 00:16:51 +0000 UTC" firstStartedPulling="2026-04-24 00:16:51.704897087 +0000 UTC m=+6.604000554" lastFinishedPulling="2026-04-24 00:16:57.598814051 +0000 UTC m=+12.497917518" observedRunningTime="2026-04-24 00:16:58.478564388 +0000 UTC m=+13.377667845" watchObservedRunningTime="2026-04-24 00:16:58.486112529 +0000 UTC m=+13.385215986" Apr 24 00:16:58.622383 systemd[1]: cri-containerd-a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a.scope: Deactivated successfully. 
Apr 24 00:16:58.630615 containerd[1553]: time="2026-04-24T00:16:58.630574709Z" level=info msg="received container exit event container_id:\"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\" id:\"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\" pid:3306 exited_at:{seconds:1776989818 nanos:626131789}" Apr 24 00:16:58.649302 containerd[1553]: time="2026-04-24T00:16:58.649042636Z" level=info msg="StartContainer for \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\" returns successfully" Apr 24 00:16:58.687702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a-rootfs.mount: Deactivated successfully. Apr 24 00:16:59.283002 kubelet[2742]: E0424 00:16:59.282850 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:59.284824 kubelet[2742]: E0424 00:16:59.283384 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:16:59.288659 containerd[1553]: time="2026-04-24T00:16:59.288589246Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 24 00:16:59.299977 containerd[1553]: time="2026-04-24T00:16:59.299673741Z" level=info msg="Container 13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:59.304983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059615899.mount: Deactivated successfully. 
Apr 24 00:16:59.309797 containerd[1553]: time="2026-04-24T00:16:59.309775111Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\"" Apr 24 00:16:59.312441 containerd[1553]: time="2026-04-24T00:16:59.312302161Z" level=info msg="StartContainer for \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\"" Apr 24 00:16:59.314581 containerd[1553]: time="2026-04-24T00:16:59.314562281Z" level=info msg="connecting to shim 13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9" address="unix:///run/containerd/s/2bcbe601ad226b127b56304f625617de906a730f383ce0f6a3f5d2ef76d37100" protocol=ttrpc version=3 Apr 24 00:16:59.338428 systemd[1]: Started cri-containerd-13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9.scope - libcontainer container 13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9. Apr 24 00:16:59.387619 systemd[1]: cri-containerd-13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9.scope: Deactivated successfully. Apr 24 00:16:59.389028 containerd[1553]: time="2026-04-24T00:16:59.388586708Z" level=info msg="received container exit event container_id:\"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\" id:\"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\" pid:3345 exited_at:{seconds:1776989819 nanos:388044155}" Apr 24 00:16:59.404162 containerd[1553]: time="2026-04-24T00:16:59.404135220Z" level=info msg="StartContainer for \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\" returns successfully" Apr 24 00:16:59.429326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9-rootfs.mount: Deactivated successfully. 
Apr 24 00:16:59.945303 kubelet[2742]: E0424 00:16:59.945185 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:00.287861 kubelet[2742]: E0424 00:17:00.287759 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:00.288708 kubelet[2742]: E0424 00:17:00.288682 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:00.298539 containerd[1553]: time="2026-04-24T00:17:00.298503385Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 24 00:17:00.315466 containerd[1553]: time="2026-04-24T00:17:00.314981579Z" level=info msg="Container 809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:17:00.321579 containerd[1553]: time="2026-04-24T00:17:00.321542543Z" level=info msg="CreateContainer within sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\"" Apr 24 00:17:00.322293 containerd[1553]: time="2026-04-24T00:17:00.322141496Z" level=info msg="StartContainer for \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\"" Apr 24 00:17:00.323460 containerd[1553]: time="2026-04-24T00:17:00.323429671Z" level=info msg="connecting to shim 809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1" 
address="unix:///run/containerd/s/2bcbe601ad226b127b56304f625617de906a730f383ce0f6a3f5d2ef76d37100" protocol=ttrpc version=3 Apr 24 00:17:00.350464 systemd[1]: Started cri-containerd-809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1.scope - libcontainer container 809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1. Apr 24 00:17:00.395021 containerd[1553]: time="2026-04-24T00:17:00.394229202Z" level=info msg="StartContainer for \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" returns successfully" Apr 24 00:17:00.548944 kubelet[2742]: I0424 00:17:00.548831 2742 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 24 00:17:00.589413 systemd[1]: Created slice kubepods-burstable-pod5b64507c_0b1f_45f1_b73f_d6bab8d4e2c2.slice - libcontainer container kubepods-burstable-pod5b64507c_0b1f_45f1_b73f_d6bab8d4e2c2.slice. Apr 24 00:17:00.596995 systemd[1]: Created slice kubepods-burstable-podbfc401c3_7fd8_4917_8006_bb1fee76111b.slice - libcontainer container kubepods-burstable-podbfc401c3_7fd8_4917_8006_bb1fee76111b.slice. 
Apr 24 00:17:00.598840 kubelet[2742]: I0424 00:17:00.598805 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d25z\" (UniqueName: \"kubernetes.io/projected/5b64507c-0b1f-45f1-b73f-d6bab8d4e2c2-kube-api-access-7d25z\") pod \"coredns-66bc5c9577-4vvn2\" (UID: \"5b64507c-0b1f-45f1-b73f-d6bab8d4e2c2\") " pod="kube-system/coredns-66bc5c9577-4vvn2" Apr 24 00:17:00.599983 kubelet[2742]: I0424 00:17:00.599945 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqczr\" (UniqueName: \"kubernetes.io/projected/bfc401c3-7fd8-4917-8006-bb1fee76111b-kube-api-access-lqczr\") pod \"coredns-66bc5c9577-6d595\" (UID: \"bfc401c3-7fd8-4917-8006-bb1fee76111b\") " pod="kube-system/coredns-66bc5c9577-6d595" Apr 24 00:17:00.599983 kubelet[2742]: I0424 00:17:00.599974 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfc401c3-7fd8-4917-8006-bb1fee76111b-config-volume\") pod \"coredns-66bc5c9577-6d595\" (UID: \"bfc401c3-7fd8-4917-8006-bb1fee76111b\") " pod="kube-system/coredns-66bc5c9577-6d595" Apr 24 00:17:00.600091 kubelet[2742]: I0424 00:17:00.599992 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b64507c-0b1f-45f1-b73f-d6bab8d4e2c2-config-volume\") pod \"coredns-66bc5c9577-4vvn2\" (UID: \"5b64507c-0b1f-45f1-b73f-d6bab8d4e2c2\") " pod="kube-system/coredns-66bc5c9577-4vvn2" Apr 24 00:17:00.894642 kubelet[2742]: E0424 00:17:00.894594 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:00.896019 containerd[1553]: time="2026-04-24T00:17:00.895721071Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-4vvn2,Uid:5b64507c-0b1f-45f1-b73f-d6bab8d4e2c2,Namespace:kube-system,Attempt:0,}" Apr 24 00:17:00.907622 kubelet[2742]: E0424 00:17:00.907412 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:00.908656 containerd[1553]: time="2026-04-24T00:17:00.908386230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6d595,Uid:bfc401c3-7fd8-4917-8006-bb1fee76111b,Namespace:kube-system,Attempt:0,}" Apr 24 00:17:01.294058 kubelet[2742]: E0424 00:17:01.293957 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:01.309136 kubelet[2742]: I0424 00:17:01.309077 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gw5cm" podStartSLOduration=5.534568485 podStartE2EDuration="10.30906241s" podCreationTimestamp="2026-04-24 00:16:51 +0000 UTC" firstStartedPulling="2026-04-24 00:16:51.559834639 +0000 UTC m=+6.458938106" lastFinishedPulling="2026-04-24 00:16:56.334328574 +0000 UTC m=+11.233432031" observedRunningTime="2026-04-24 00:17:01.308345767 +0000 UTC m=+16.207449244" watchObservedRunningTime="2026-04-24 00:17:01.30906241 +0000 UTC m=+16.208165867" Apr 24 00:17:02.067835 update_engine[1533]: I20260424 00:17:02.067718 1533 update_attempter.cc:509] Updating boot flags... 
Apr 24 00:17:02.296920 kubelet[2742]: E0424 00:17:02.296760 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:02.753561 systemd-networkd[1439]: cilium_host: Link UP Apr 24 00:17:02.755463 systemd-networkd[1439]: cilium_net: Link UP Apr 24 00:17:02.755692 systemd-networkd[1439]: cilium_net: Gained carrier Apr 24 00:17:02.755895 systemd-networkd[1439]: cilium_host: Gained carrier Apr 24 00:17:02.786590 systemd-networkd[1439]: cilium_net: Gained IPv6LL Apr 24 00:17:02.880064 systemd-networkd[1439]: cilium_vxlan: Link UP Apr 24 00:17:02.881326 systemd-networkd[1439]: cilium_vxlan: Gained carrier Apr 24 00:17:03.123346 kernel: NET: Registered PF_ALG protocol family Apr 24 00:17:03.299668 kubelet[2742]: E0424 00:17:03.299621 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:03.352664 systemd-networkd[1439]: cilium_host: Gained IPv6LL Apr 24 00:17:03.963168 systemd-networkd[1439]: lxc_health: Link UP Apr 24 00:17:03.970568 systemd-networkd[1439]: lxc_health: Gained carrier Apr 24 00:17:04.447838 systemd-networkd[1439]: lxccb029c8e56d4: Link UP Apr 24 00:17:04.458443 kernel: eth0: renamed from tmp1189a Apr 24 00:17:04.461640 systemd-networkd[1439]: lxccb029c8e56d4: Gained carrier Apr 24 00:17:04.473846 systemd-networkd[1439]: lxc80e634c191e5: Link UP Apr 24 00:17:04.488622 kernel: eth0: renamed from tmp09393 Apr 24 00:17:04.493894 systemd-networkd[1439]: lxc80e634c191e5: Gained carrier Apr 24 00:17:04.504973 systemd-networkd[1439]: cilium_vxlan: Gained IPv6LL Apr 24 00:17:05.210463 systemd-networkd[1439]: lxc_health: Gained IPv6LL Apr 24 00:17:05.444783 kubelet[2742]: E0424 00:17:05.444730 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:06.168452 systemd-networkd[1439]: lxccb029c8e56d4: Gained IPv6LL Apr 24 00:17:06.308687 kubelet[2742]: E0424 00:17:06.307517 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:06.424681 systemd-networkd[1439]: lxc80e634c191e5: Gained IPv6LL Apr 24 00:17:07.311097 kubelet[2742]: E0424 00:17:07.309731 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:08.115737 containerd[1553]: time="2026-04-24T00:17:08.115419229Z" level=info msg="connecting to shim 0939355e4091e35790d12dd58226c020762623dc112ce8cbe68ce2aa0093896d" address="unix:///run/containerd/s/47e182d593d2cf039e94d09ae9c1d9ec23f5fe37085651a337bb2b01823f58cb" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:17:08.159446 systemd[1]: Started cri-containerd-0939355e4091e35790d12dd58226c020762623dc112ce8cbe68ce2aa0093896d.scope - libcontainer container 0939355e4091e35790d12dd58226c020762623dc112ce8cbe68ce2aa0093896d. Apr 24 00:17:08.164917 containerd[1553]: time="2026-04-24T00:17:08.164663972Z" level=info msg="connecting to shim 1189af74ae48ed0783186379e9950891ab6f88bc335e8b2030a79141aea71028" address="unix:///run/containerd/s/4da091595f09b6080899ba461a1fdee9ceed818832a2d4bedb50befa07f36774" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:17:08.207681 systemd[1]: Started cri-containerd-1189af74ae48ed0783186379e9950891ab6f88bc335e8b2030a79141aea71028.scope - libcontainer container 1189af74ae48ed0783186379e9950891ab6f88bc335e8b2030a79141aea71028. 
Apr 24 00:17:08.297422 containerd[1553]: time="2026-04-24T00:17:08.297344548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6d595,Uid:bfc401c3-7fd8-4917-8006-bb1fee76111b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0939355e4091e35790d12dd58226c020762623dc112ce8cbe68ce2aa0093896d\"" Apr 24 00:17:08.300243 kubelet[2742]: E0424 00:17:08.299655 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:08.305839 containerd[1553]: time="2026-04-24T00:17:08.305815041Z" level=info msg="CreateContainer within sandbox \"0939355e4091e35790d12dd58226c020762623dc112ce8cbe68ce2aa0093896d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 00:17:08.320242 containerd[1553]: time="2026-04-24T00:17:08.320206430Z" level=info msg="Container 4ff2a1e902dbc8060dbb7951eb4cf66b29fd968680027cbe758d5295353afcbd: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:17:08.330912 containerd[1553]: time="2026-04-24T00:17:08.330883349Z" level=info msg="CreateContainer within sandbox \"0939355e4091e35790d12dd58226c020762623dc112ce8cbe68ce2aa0093896d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ff2a1e902dbc8060dbb7951eb4cf66b29fd968680027cbe758d5295353afcbd\"" Apr 24 00:17:08.335031 containerd[1553]: time="2026-04-24T00:17:08.334990740Z" level=info msg="StartContainer for \"4ff2a1e902dbc8060dbb7951eb4cf66b29fd968680027cbe758d5295353afcbd\"" Apr 24 00:17:08.345529 containerd[1553]: time="2026-04-24T00:17:08.345504977Z" level=info msg="connecting to shim 4ff2a1e902dbc8060dbb7951eb4cf66b29fd968680027cbe758d5295353afcbd" address="unix:///run/containerd/s/47e182d593d2cf039e94d09ae9c1d9ec23f5fe37085651a337bb2b01823f58cb" protocol=ttrpc version=3 Apr 24 00:17:08.363817 containerd[1553]: time="2026-04-24T00:17:08.363754517Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-4vvn2,Uid:5b64507c-0b1f-45f1-b73f-d6bab8d4e2c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1189af74ae48ed0783186379e9950891ab6f88bc335e8b2030a79141aea71028\"" Apr 24 00:17:08.366970 kubelet[2742]: E0424 00:17:08.366885 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:08.374270 systemd[1]: Started cri-containerd-4ff2a1e902dbc8060dbb7951eb4cf66b29fd968680027cbe758d5295353afcbd.scope - libcontainer container 4ff2a1e902dbc8060dbb7951eb4cf66b29fd968680027cbe758d5295353afcbd. Apr 24 00:17:08.375174 containerd[1553]: time="2026-04-24T00:17:08.375004837Z" level=info msg="CreateContainer within sandbox \"1189af74ae48ed0783186379e9950891ab6f88bc335e8b2030a79141aea71028\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 00:17:08.389586 containerd[1553]: time="2026-04-24T00:17:08.389272505Z" level=info msg="Container 71437e0e3b5bfd0f89d2beaba48a2dfb16750d57dcf530f50a860122ad41bd34: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:17:08.395409 containerd[1553]: time="2026-04-24T00:17:08.394647910Z" level=info msg="CreateContainer within sandbox \"1189af74ae48ed0783186379e9950891ab6f88bc335e8b2030a79141aea71028\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71437e0e3b5bfd0f89d2beaba48a2dfb16750d57dcf530f50a860122ad41bd34\"" Apr 24 00:17:08.396098 containerd[1553]: time="2026-04-24T00:17:08.396071364Z" level=info msg="StartContainer for \"71437e0e3b5bfd0f89d2beaba48a2dfb16750d57dcf530f50a860122ad41bd34\"" Apr 24 00:17:08.397761 containerd[1553]: time="2026-04-24T00:17:08.397731308Z" level=info msg="connecting to shim 71437e0e3b5bfd0f89d2beaba48a2dfb16750d57dcf530f50a860122ad41bd34" address="unix:///run/containerd/s/4da091595f09b6080899ba461a1fdee9ceed818832a2d4bedb50befa07f36774" protocol=ttrpc version=3 Apr 24 
00:17:08.428556 systemd[1]: Started cri-containerd-71437e0e3b5bfd0f89d2beaba48a2dfb16750d57dcf530f50a860122ad41bd34.scope - libcontainer container 71437e0e3b5bfd0f89d2beaba48a2dfb16750d57dcf530f50a860122ad41bd34. Apr 24 00:17:08.435831 containerd[1553]: time="2026-04-24T00:17:08.435785121Z" level=info msg="StartContainer for \"4ff2a1e902dbc8060dbb7951eb4cf66b29fd968680027cbe758d5295353afcbd\" returns successfully" Apr 24 00:17:08.485990 containerd[1553]: time="2026-04-24T00:17:08.485952395Z" level=info msg="StartContainer for \"71437e0e3b5bfd0f89d2beaba48a2dfb16750d57dcf530f50a860122ad41bd34\" returns successfully" Apr 24 00:17:09.322935 kubelet[2742]: E0424 00:17:09.322897 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:09.326821 kubelet[2742]: E0424 00:17:09.326782 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:09.357657 kubelet[2742]: I0424 00:17:09.352968 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6d595" podStartSLOduration=18.352947068 podStartE2EDuration="18.352947068s" podCreationTimestamp="2026-04-24 00:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:17:09.336686245 +0000 UTC m=+24.235789702" watchObservedRunningTime="2026-04-24 00:17:09.352947068 +0000 UTC m=+24.252050525" Apr 24 00:17:09.374546 kubelet[2742]: I0424 00:17:09.374483 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4vvn2" podStartSLOduration=18.374465553 podStartE2EDuration="18.374465553s" podCreationTimestamp="2026-04-24 00:16:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:17:09.372331227 +0000 UTC m=+24.271434694" watchObservedRunningTime="2026-04-24 00:17:09.374465553 +0000 UTC m=+24.273569010" Apr 24 00:17:10.329305 kubelet[2742]: E0424 00:17:10.329112 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:10.329305 kubelet[2742]: E0424 00:17:10.329135 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:11.331384 kubelet[2742]: E0424 00:17:11.331351 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:17:11.331934 kubelet[2742]: E0424 00:17:11.331862 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:07.207577 kubelet[2742]: E0424 00:18:07.206988 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:11.207314 kubelet[2742]: E0424 00:18:11.206782 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:13.209516 kubelet[2742]: E0424 00:18:13.207473 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:16.206724 kubelet[2742]: E0424 00:18:16.206686 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:21.208898 kubelet[2742]: E0424 00:18:21.208759 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:21.208898 kubelet[2742]: E0424 00:18:21.208820 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:22.206262 kubelet[2742]: E0424 00:18:22.206227 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:24.206447 kubelet[2742]: E0424 00:18:24.206410 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:18:30.093048 systemd[1]: Started sshd@7-172.234.204.89:22-65.20.187.47:51040.service - OpenSSH per-connection server daemon (65.20.187.47:51040). Apr 24 00:18:31.555856 sshd[4063]: Invalid user config from 65.20.187.47 port 51040 Apr 24 00:18:31.900726 sshd[4063]: PAM user mismatch Apr 24 00:18:31.902833 systemd[1]: sshd@7-172.234.204.89:22-65.20.187.47:51040.service: Deactivated successfully. Apr 24 00:18:37.568609 systemd[1]: Started sshd@8-172.234.204.89:22-222.117.0.253:48743.service - OpenSSH per-connection server daemon (222.117.0.253:48743). 
Apr 24 00:18:40.636507 sshd[4070]: Invalid user config from 222.117.0.253 port 48743 Apr 24 00:18:41.241431 sshd[4070]: PAM user mismatch Apr 24 00:18:41.243739 systemd[1]: sshd@8-172.234.204.89:22-222.117.0.253:48743.service: Deactivated successfully. Apr 24 00:18:51.641569 systemd[1]: Started sshd@9-172.234.204.89:22-20.229.252.112:49148.service - OpenSSH per-connection server daemon (20.229.252.112:49148). Apr 24 00:18:52.193327 sshd[4079]: Accepted publickey for core from 20.229.252.112 port 49148 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:52.195470 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:52.201341 systemd-logind[1532]: New session 8 of user core. Apr 24 00:18:52.208419 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 00:18:52.592874 sshd[4084]: Connection closed by 20.229.252.112 port 49148 Apr 24 00:18:52.593127 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:52.601143 systemd[1]: sshd@9-172.234.204.89:22-20.229.252.112:49148.service: Deactivated successfully. Apr 24 00:18:52.604887 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 00:18:52.606143 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Apr 24 00:18:52.607796 systemd-logind[1532]: Removed session 8. Apr 24 00:18:57.705868 systemd[1]: Started sshd@10-172.234.204.89:22-20.229.252.112:47370.service - OpenSSH per-connection server daemon (20.229.252.112:47370). Apr 24 00:18:58.251078 sshd[4097]: Accepted publickey for core from 20.229.252.112 port 47370 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:58.252558 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:58.259306 systemd-logind[1532]: New session 9 of user core. Apr 24 00:18:58.267426 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 24 00:18:58.613021 sshd[4100]: Connection closed by 20.229.252.112 port 47370
Apr 24 00:18:58.614562 sshd-session[4097]: pam_unix(sshd:session): session closed for user core
Apr 24 00:18:58.619643 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit.
Apr 24 00:18:58.619857 systemd[1]: sshd@10-172.234.204.89:22-20.229.252.112:47370.service: Deactivated successfully.
Apr 24 00:18:58.622022 systemd[1]: session-9.scope: Deactivated successfully.
Apr 24 00:18:58.623876 systemd-logind[1532]: Removed session 9.
Apr 24 00:19:03.724209 systemd[1]: Started sshd@11-172.234.204.89:22-20.229.252.112:47374.service - OpenSSH per-connection server daemon (20.229.252.112:47374).
Apr 24 00:19:04.278171 sshd[4113]: Accepted publickey for core from 20.229.252.112 port 47374 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:04.280254 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:04.289850 systemd-logind[1532]: New session 10 of user core.
Apr 24 00:19:04.297423 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 24 00:19:04.654937 sshd[4116]: Connection closed by 20.229.252.112 port 47374
Apr 24 00:19:04.656504 sshd-session[4113]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:04.664589 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit.
Apr 24 00:19:04.665041 systemd[1]: sshd@11-172.234.204.89:22-20.229.252.112:47374.service: Deactivated successfully.
Apr 24 00:19:04.668925 systemd[1]: session-10.scope: Deactivated successfully.
Apr 24 00:19:04.671806 systemd-logind[1532]: Removed session 10.
Apr 24 00:19:04.765599 systemd[1]: Started sshd@12-172.234.204.89:22-20.229.252.112:47388.service - OpenSSH per-connection server daemon (20.229.252.112:47388).
Apr 24 00:19:05.287590 sshd[4128]: Accepted publickey for core from 20.229.252.112 port 47388 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:05.290154 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:05.298490 systemd-logind[1532]: New session 11 of user core.
Apr 24 00:19:05.306463 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 24 00:19:05.675201 sshd[4131]: Connection closed by 20.229.252.112 port 47388
Apr 24 00:19:05.677135 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:05.681777 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit.
Apr 24 00:19:05.682780 systemd[1]: sshd@12-172.234.204.89:22-20.229.252.112:47388.service: Deactivated successfully.
Apr 24 00:19:05.685164 systemd[1]: session-11.scope: Deactivated successfully.
Apr 24 00:19:05.687759 systemd-logind[1532]: Removed session 11.
Apr 24 00:19:05.790052 systemd[1]: Started sshd@13-172.234.204.89:22-20.229.252.112:47404.service - OpenSSH per-connection server daemon (20.229.252.112:47404).
Apr 24 00:19:06.348017 sshd[4141]: Accepted publickey for core from 20.229.252.112 port 47404 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:06.351209 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:06.359735 systemd-logind[1532]: New session 12 of user core.
Apr 24 00:19:06.368699 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 24 00:19:06.787157 sshd[4144]: Connection closed by 20.229.252.112 port 47404
Apr 24 00:19:06.785542 sshd-session[4141]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:06.792550 systemd[1]: sshd@13-172.234.204.89:22-20.229.252.112:47404.service: Deactivated successfully.
Apr 24 00:19:06.797064 systemd[1]: session-12.scope: Deactivated successfully.
Apr 24 00:19:06.802068 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit.
Apr 24 00:19:06.804950 systemd-logind[1532]: Removed session 12.
Apr 24 00:19:11.894756 systemd[1]: Started sshd@14-172.234.204.89:22-20.229.252.112:45194.service - OpenSSH per-connection server daemon (20.229.252.112:45194).
Apr 24 00:19:12.450311 sshd[4156]: Accepted publickey for core from 20.229.252.112 port 45194 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:12.453463 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:12.459751 systemd-logind[1532]: New session 13 of user core.
Apr 24 00:19:12.467454 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 24 00:19:12.815848 sshd[4159]: Connection closed by 20.229.252.112 port 45194
Apr 24 00:19:12.816480 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:12.821952 systemd[1]: sshd@14-172.234.204.89:22-20.229.252.112:45194.service: Deactivated successfully.
Apr 24 00:19:12.824590 systemd[1]: session-13.scope: Deactivated successfully.
Apr 24 00:19:12.825650 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit.
Apr 24 00:19:12.827105 systemd-logind[1532]: Removed session 13.
Apr 24 00:19:17.921698 systemd[1]: Started sshd@15-172.234.204.89:22-20.229.252.112:45326.service - OpenSSH per-connection server daemon (20.229.252.112:45326).
Apr 24 00:19:18.443477 sshd[4170]: Accepted publickey for core from 20.229.252.112 port 45326 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:18.445011 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:18.451636 systemd-logind[1532]: New session 14 of user core.
Apr 24 00:19:18.458547 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 24 00:19:18.791880 sshd[4173]: Connection closed by 20.229.252.112 port 45326
Apr 24 00:19:18.793456 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:18.798033 systemd[1]: sshd@15-172.234.204.89:22-20.229.252.112:45326.service: Deactivated successfully.
Apr 24 00:19:18.801054 systemd[1]: session-14.scope: Deactivated successfully.
Apr 24 00:19:18.802002 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit.
Apr 24 00:19:18.804215 systemd-logind[1532]: Removed session 14.
Apr 24 00:19:18.901612 systemd[1]: Started sshd@16-172.234.204.89:22-20.229.252.112:45340.service - OpenSSH per-connection server daemon (20.229.252.112:45340).
Apr 24 00:19:19.449142 sshd[4185]: Accepted publickey for core from 20.229.252.112 port 45340 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:19.449749 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:19.454656 systemd-logind[1532]: New session 15 of user core.
Apr 24 00:19:19.458402 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 24 00:19:19.905495 sshd[4188]: Connection closed by 20.229.252.112 port 45340
Apr 24 00:19:19.907742 sshd-session[4185]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:19.913681 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit.
Apr 24 00:19:19.914139 systemd[1]: sshd@16-172.234.204.89:22-20.229.252.112:45340.service: Deactivated successfully.
Apr 24 00:19:19.916139 systemd[1]: session-15.scope: Deactivated successfully.
Apr 24 00:19:19.917907 systemd-logind[1532]: Removed session 15.
Apr 24 00:19:20.016489 systemd[1]: Started sshd@17-172.234.204.89:22-20.229.252.112:45342.service - OpenSSH per-connection server daemon (20.229.252.112:45342).
Apr 24 00:19:20.540151 sshd[4198]: Accepted publickey for core from 20.229.252.112 port 45342 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:20.540873 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:20.547937 systemd-logind[1532]: New session 16 of user core.
Apr 24 00:19:20.551435 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 24 00:19:21.392272 sshd[4201]: Connection closed by 20.229.252.112 port 45342
Apr 24 00:19:21.393492 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:21.398427 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit.
Apr 24 00:19:21.399606 systemd[1]: sshd@17-172.234.204.89:22-20.229.252.112:45342.service: Deactivated successfully.
Apr 24 00:19:21.404303 systemd[1]: session-16.scope: Deactivated successfully.
Apr 24 00:19:21.407311 systemd-logind[1532]: Removed session 16.
Apr 24 00:19:21.507503 systemd[1]: Started sshd@18-172.234.204.89:22-20.229.252.112:45352.service - OpenSSH per-connection server daemon (20.229.252.112:45352).
Apr 24 00:19:22.054454 sshd[4215]: Accepted publickey for core from 20.229.252.112 port 45352 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:22.055955 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:22.061524 systemd-logind[1532]: New session 17 of user core.
Apr 24 00:19:22.069416 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 24 00:19:22.526100 sshd[4220]: Connection closed by 20.229.252.112 port 45352
Apr 24 00:19:22.527646 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:22.531878 systemd[1]: sshd@18-172.234.204.89:22-20.229.252.112:45352.service: Deactivated successfully.
Apr 24 00:19:22.534490 systemd[1]: session-17.scope: Deactivated successfully.
Apr 24 00:19:22.535560 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit.
Apr 24 00:19:22.537085 systemd-logind[1532]: Removed session 17.
Apr 24 00:19:22.636111 systemd[1]: Started sshd@19-172.234.204.89:22-20.229.252.112:45358.service - OpenSSH per-connection server daemon (20.229.252.112:45358).
Apr 24 00:19:23.180092 sshd[4232]: Accepted publickey for core from 20.229.252.112 port 45358 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:23.180917 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:23.188344 systemd-logind[1532]: New session 18 of user core.
Apr 24 00:19:23.196996 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 24 00:19:23.207611 kubelet[2742]: E0424 00:19:23.207578 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:23.547638 sshd[4235]: Connection closed by 20.229.252.112 port 45358
Apr 24 00:19:23.549533 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:23.554190 systemd[1]: sshd@19-172.234.204.89:22-20.229.252.112:45358.service: Deactivated successfully.
Apr 24 00:19:23.556630 systemd[1]: session-18.scope: Deactivated successfully.
Apr 24 00:19:23.558004 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit.
Apr 24 00:19:23.559602 systemd-logind[1532]: Removed session 18.
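An illustrative sketch, not part of the log: the recurring sshd/systemd-logind pattern above can be paired into session open/close events by matching the "New session N of user U." and "Removed session N." messages. `LOG` below is a small excerpt in the same format as the journal lines shown; the parser itself is a hypothetical helper, not a tool referenced by the log.

```python
import re

# Excerpt in the journal format shown above (timestamps from the log).
LOG = """\
Apr 24 00:18:52.201341 systemd-logind[1532]: New session 8 of user core.
Apr 24 00:18:52.607796 systemd-logind[1532]: Removed session 8.
Apr 24 00:18:58.259306 systemd-logind[1532]: New session 9 of user core.
Apr 24 00:18:58.623876 systemd-logind[1532]: Removed session 9.
"""

NEW = re.compile(r"New session (\d+) of user (\w+)\.")
REMOVED = re.compile(r"Removed session (\d+)\.")

def session_events(text):
    """Extract ('open'|'close', session_number, user) tuples in log order."""
    events = []
    for line in text.splitlines():
        if m := NEW.search(line):
            events.append(("open", int(m.group(1)), m.group(2)))
        elif m := REMOVED.search(line):
            events.append(("close", int(m.group(1)), None))
    return events

print(session_events(LOG))
```

Each short-lived session here opens and closes within about a second, consistent with single-command SSH invocations rather than interactive logins.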
Apr 24 00:19:25.207704 kubelet[2742]: E0424 00:19:25.207518 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:27.207386 kubelet[2742]: E0424 00:19:27.206649 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:28.659369 systemd[1]: Started sshd@20-172.234.204.89:22-20.229.252.112:49288.service - OpenSSH per-connection server daemon (20.229.252.112:49288).
Apr 24 00:19:29.207320 kubelet[2742]: E0424 00:19:29.207170 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:29.210081 sshd[4249]: Accepted publickey for core from 20.229.252.112 port 49288 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:29.212291 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:29.217745 systemd-logind[1532]: New session 19 of user core.
Apr 24 00:19:29.225423 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 24 00:19:29.583353 sshd[4252]: Connection closed by 20.229.252.112 port 49288
Apr 24 00:19:29.584087 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:29.589966 systemd[1]: sshd@20-172.234.204.89:22-20.229.252.112:49288.service: Deactivated successfully.
Apr 24 00:19:29.593425 systemd[1]: session-19.scope: Deactivated successfully.
Apr 24 00:19:29.594596 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit.
Apr 24 00:19:29.596512 systemd-logind[1532]: Removed session 19.
Apr 24 00:19:30.207104 kubelet[2742]: E0424 00:19:30.207069 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:34.206584 kubelet[2742]: E0424 00:19:34.206547 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:34.693484 systemd[1]: Started sshd@21-172.234.204.89:22-20.229.252.112:49300.service - OpenSSH per-connection server daemon (20.229.252.112:49300).
Apr 24 00:19:35.217085 sshd[4264]: Accepted publickey for core from 20.229.252.112 port 49300 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:35.219003 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:35.225004 systemd-logind[1532]: New session 20 of user core.
Apr 24 00:19:35.232423 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 24 00:19:35.572913 sshd[4267]: Connection closed by 20.229.252.112 port 49300
Apr 24 00:19:35.573648 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:35.578311 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit.
Apr 24 00:19:35.578981 systemd[1]: sshd@21-172.234.204.89:22-20.229.252.112:49300.service: Deactivated successfully.
Apr 24 00:19:35.580847 systemd[1]: session-20.scope: Deactivated successfully.
Apr 24 00:19:35.583746 systemd-logind[1532]: Removed session 20.
Apr 24 00:19:35.683348 systemd[1]: Started sshd@22-172.234.204.89:22-20.229.252.112:49312.service - OpenSSH per-connection server daemon (20.229.252.112:49312).
Apr 24 00:19:36.205334 sshd[4278]: Accepted publickey for core from 20.229.252.112 port 49312 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:36.206403 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:36.211803 systemd-logind[1532]: New session 21 of user core.
Apr 24 00:19:36.221438 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 24 00:19:37.717075 containerd[1553]: time="2026-04-24T00:19:37.715622774Z" level=info msg="StopContainer for \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" with timeout 30 (s)"
Apr 24 00:19:37.718935 containerd[1553]: time="2026-04-24T00:19:37.718388183Z" level=info msg="Stop container \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" with signal terminated"
Apr 24 00:19:37.744723 containerd[1553]: time="2026-04-24T00:19:37.744656998Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 00:19:37.750614 systemd[1]: cri-containerd-944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775.scope: Deactivated successfully.
Apr 24 00:19:37.754661 containerd[1553]: time="2026-04-24T00:19:37.754543477Z" level=info msg="received container exit event container_id:\"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" id:\"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" pid:3272 exited_at:{seconds:1776989977 nanos:754066356}"
Apr 24 00:19:37.759857 containerd[1553]: time="2026-04-24T00:19:37.759446351Z" level=info msg="StopContainer for \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" with timeout 2 (s)"
Apr 24 00:19:37.760809 containerd[1553]: time="2026-04-24T00:19:37.760764804Z" level=info msg="Stop container \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" with signal terminated"
Apr 24 00:19:37.776321 systemd-networkd[1439]: lxc_health: Link DOWN
Apr 24 00:19:37.776347 systemd-networkd[1439]: lxc_health: Lost carrier
Apr 24 00:19:37.796626 systemd[1]: cri-containerd-809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1.scope: Deactivated successfully.
Apr 24 00:19:37.796972 systemd[1]: cri-containerd-809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1.scope: Consumed 7.190s CPU time, 128.6M memory peak, 112K read from disk, 13.3M written to disk.
Apr 24 00:19:37.803692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775-rootfs.mount: Deactivated successfully.
Apr 24 00:19:37.804511 containerd[1553]: time="2026-04-24T00:19:37.803925380Z" level=info msg="received container exit event container_id:\"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" id:\"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" pid:3382 exited_at:{seconds:1776989977 nanos:798768254}"
Apr 24 00:19:37.816640 containerd[1553]: time="2026-04-24T00:19:37.816614285Z" level=info msg="StopContainer for \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" returns successfully"
Apr 24 00:19:37.818133 containerd[1553]: time="2026-04-24T00:19:37.818081560Z" level=info msg="StopPodSandbox for \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\""
Apr 24 00:19:37.818316 containerd[1553]: time="2026-04-24T00:19:37.818250131Z" level=info msg="Container to stop \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 00:19:37.832946 systemd[1]: cri-containerd-1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c.scope: Deactivated successfully.
Apr 24 00:19:37.836840 containerd[1553]: time="2026-04-24T00:19:37.836588553Z" level=info msg="received sandbox exit event container_id:\"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" id:\"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" exit_status:137 exited_at:{seconds:1776989977 nanos:834361777}" monitor_name=podsandbox
Apr 24 00:19:37.849100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1-rootfs.mount: Deactivated successfully.
Apr 24 00:19:37.862317 containerd[1553]: time="2026-04-24T00:19:37.862123127Z" level=info msg="StopContainer for \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" returns successfully"
Apr 24 00:19:37.863484 containerd[1553]: time="2026-04-24T00:19:37.863440081Z" level=info msg="StopPodSandbox for \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\""
Apr 24 00:19:37.866313 containerd[1553]: time="2026-04-24T00:19:37.865938308Z" level=info msg="Container to stop \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 00:19:37.866313 containerd[1553]: time="2026-04-24T00:19:37.866218669Z" level=info msg="Container to stop \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 00:19:37.866982 containerd[1553]: time="2026-04-24T00:19:37.866238989Z" level=info msg="Container to stop \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 00:19:37.866982 containerd[1553]: time="2026-04-24T00:19:37.866748540Z" level=info msg="Container to stop \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 00:19:37.866982 containerd[1553]: time="2026-04-24T00:19:37.866766370Z" level=info msg="Container to stop \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 24 00:19:37.879637 systemd[1]: cri-containerd-5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84.scope: Deactivated successfully.
Apr 24 00:19:37.886413 containerd[1553]: time="2026-04-24T00:19:37.886256636Z" level=info msg="received sandbox exit event container_id:\"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" id:\"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" exit_status:137 exited_at:{seconds:1776989977 nanos:885730124}" monitor_name=podsandbox
Apr 24 00:19:37.895687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c-rootfs.mount: Deactivated successfully.
Apr 24 00:19:37.898459 containerd[1553]: time="2026-04-24T00:19:37.898353941Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/containerd.sock.ttrpc->@: write: broken pipe"
Apr 24 00:19:37.898946 containerd[1553]: time="2026-04-24T00:19:37.898774302Z" level=info msg="shim disconnected" id=1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c namespace=k8s.io
Apr 24 00:19:37.898946 containerd[1553]: time="2026-04-24T00:19:37.898793762Z" level=warning msg="cleaning up after shim disconnected" id=1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c namespace=k8s.io
Apr 24 00:19:37.898946 containerd[1553]: time="2026-04-24T00:19:37.898814332Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 00:19:37.916158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84-rootfs.mount: Deactivated successfully.
Apr 24 00:19:37.924518 containerd[1553]: time="2026-04-24T00:19:37.924351746Z" level=info msg="shim disconnected" id=5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84 namespace=k8s.io
Apr 24 00:19:37.924518 containerd[1553]: time="2026-04-24T00:19:37.924376426Z" level=warning msg="cleaning up after shim disconnected" id=5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84 namespace=k8s.io
Apr 24 00:19:37.924518 containerd[1553]: time="2026-04-24T00:19:37.924384056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 24 00:19:37.933996 containerd[1553]: time="2026-04-24T00:19:37.933943484Z" level=info msg="received sandbox container exit event sandbox_id:\"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" exit_status:137 exited_at:{seconds:1776989977 nanos:834361777}" monitor_name=criService
Apr 24 00:19:37.934201 containerd[1553]: time="2026-04-24T00:19:37.934176755Z" level=info msg="TearDown network for sandbox \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" successfully"
Apr 24 00:19:37.934299 containerd[1553]: time="2026-04-24T00:19:37.934263965Z" level=info msg="StopPodSandbox for \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" returns successfully"
Apr 24 00:19:37.936418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c-shm.mount: Deactivated successfully.
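An illustrative sketch, not part of the log: the containerd exit events above carry a protobuf-style `exited_at:{seconds:... nanos:...}` field. Decoding it as a Unix timestamp reproduces the `time="2026-04-24T00:19:37..."` values printed alongside. The helper name below is made up for illustration.

```python
from datetime import datetime, timezone

def exited_at_to_iso(seconds, nanos):
    """Render containerd's exited_at {seconds, nanos} pair as an ISO-8601
    UTC timestamp with nanosecond precision."""
    ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

# exited_at for sandbox 1749da59... in the events above
print(exited_at_to_iso(1776989977, 834361777))
```

The decoded value, 2026-04-24T00:19:37 UTC, matches the journal timestamps around the sandbox teardown.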
Apr 24 00:19:37.959067 containerd[1553]: time="2026-04-24T00:19:37.958882015Z" level=info msg="received sandbox container exit event sandbox_id:\"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" exit_status:137 exited_at:{seconds:1776989977 nanos:885730124}" monitor_name=criService
Apr 24 00:19:37.959488 containerd[1553]: time="2026-04-24T00:19:37.959447417Z" level=info msg="TearDown network for sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" successfully"
Apr 24 00:19:37.959488 containerd[1553]: time="2026-04-24T00:19:37.959476817Z" level=info msg="StopPodSandbox for \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" returns successfully"
Apr 24 00:19:38.026860 kubelet[2742]: I0424 00:19:38.025972 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-host-proc-sys-net\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.026860 kubelet[2742]: I0424 00:19:38.026120 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5665695f-fb75-4206-9467-1b6af4c145a0-cilium-config-path\") pod \"5665695f-fb75-4206-9467-1b6af4c145a0\" (UID: \"5665695f-fb75-4206-9467-1b6af4c145a0\") "
Apr 24 00:19:38.026860 kubelet[2742]: I0424 00:19:38.026141 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-cgroup\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.026860 kubelet[2742]: I0424 00:19:38.026155 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-xtables-lock\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.026860 kubelet[2742]: I0424 00:19:38.026167 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cni-path\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.026860 kubelet[2742]: I0424 00:19:38.026179 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-hostproc\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.027418 kubelet[2742]: I0424 00:19:38.026193 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-config-path\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.027418 kubelet[2742]: I0424 00:19:38.026204 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-host-proc-sys-kernel\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.027418 kubelet[2742]: I0424 00:19:38.026219 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-clustermesh-secrets\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.027418 kubelet[2742]: I0424 00:19:38.026232 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-bpf-maps\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.027418 kubelet[2742]: I0424 00:19:38.026245 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm8q7\" (UniqueName: \"kubernetes.io/projected/5665695f-fb75-4206-9467-1b6af4c145a0-kube-api-access-sm8q7\") pod \"5665695f-fb75-4206-9467-1b6af4c145a0\" (UID: \"5665695f-fb75-4206-9467-1b6af4c145a0\") "
Apr 24 00:19:38.027418 kubelet[2742]: I0424 00:19:38.026259 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v592q\" (UniqueName: \"kubernetes.io/projected/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-kube-api-access-v592q\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.027568 kubelet[2742]: I0424 00:19:38.026086 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.027568 kubelet[2742]: I0424 00:19:38.027318 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-hostproc" (OuterVolumeSpecName: "hostproc") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.029300 kubelet[2742]: I0424 00:19:38.026271 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-run\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.029300 kubelet[2742]: I0424 00:19:38.028328 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-hubble-tls\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.029300 kubelet[2742]: I0424 00:19:38.028346 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-etc-cni-netd\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.029300 kubelet[2742]: I0424 00:19:38.028357 2742 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-lib-modules\") pod \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\" (UID: \"e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc\") "
Apr 24 00:19:38.029300 kubelet[2742]: I0424 00:19:38.028394 2742 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-hostproc\") on node \"172-234-204-89\" DevicePath \"\""
Apr 24 00:19:38.029300 kubelet[2742]: I0424 00:19:38.028405 2742 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-host-proc-sys-net\") on node \"172-234-204-89\" DevicePath \"\""
Apr 24 00:19:38.029462 kubelet[2742]: I0424 00:19:38.028425 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.038254 kubelet[2742]: I0424 00:19:38.038213 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5665695f-fb75-4206-9467-1b6af4c145a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5665695f-fb75-4206-9467-1b6af4c145a0" (UID: "5665695f-fb75-4206-9467-1b6af4c145a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 00:19:38.038330 kubelet[2742]: I0424 00:19:38.038263 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.038874 kubelet[2742]: I0424 00:19:38.038840 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 00:19:38.038930 kubelet[2742]: I0424 00:19:38.038874 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.041313 kubelet[2742]: I0424 00:19:38.039388 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.041313 kubelet[2742]: I0424 00:19:38.039414 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cni-path" (OuterVolumeSpecName: "cni-path") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.048723 kubelet[2742]: I0424 00:19:38.048686 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 00:19:38.048917 kubelet[2742]: I0424 00:19:38.048886 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.050134 kubelet[2742]: I0424 00:19:38.049819 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-kube-api-access-v592q" (OuterVolumeSpecName: "kube-api-access-v592q") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "kube-api-access-v592q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 00:19:38.050134 kubelet[2742]: I0424 00:19:38.049880 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 24 00:19:38.054674 kubelet[2742]: I0424 00:19:38.054463 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 00:19:38.055722 kubelet[2742]: I0424 00:19:38.054787 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" (UID: "e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:19:38.056806 kubelet[2742]: I0424 00:19:38.056771 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5665695f-fb75-4206-9467-1b6af4c145a0-kube-api-access-sm8q7" (OuterVolumeSpecName: "kube-api-access-sm8q7") pod "5665695f-fb75-4206-9467-1b6af4c145a0" (UID: "5665695f-fb75-4206-9467-1b6af4c145a0"). InnerVolumeSpecName "kube-api-access-sm8q7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 00:19:38.129179 kubelet[2742]: I0424 00:19:38.129137 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-config-path\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129179 kubelet[2742]: I0424 00:19:38.129171 2742 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-host-proc-sys-kernel\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129179 kubelet[2742]: I0424 00:19:38.129181 2742 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-clustermesh-secrets\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129356 kubelet[2742]: I0424 00:19:38.129190 2742 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-bpf-maps\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129356 kubelet[2742]: I0424 00:19:38.129198 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sm8q7\" (UniqueName: \"kubernetes.io/projected/5665695f-fb75-4206-9467-1b6af4c145a0-kube-api-access-sm8q7\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129356 kubelet[2742]: I0424 00:19:38.129205 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v592q\" (UniqueName: \"kubernetes.io/projected/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-kube-api-access-v592q\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129356 kubelet[2742]: I0424 00:19:38.129213 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-run\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129356 kubelet[2742]: I0424 00:19:38.129223 2742 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-hubble-tls\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129356 kubelet[2742]: I0424 00:19:38.129231 2742 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-etc-cni-netd\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129356 kubelet[2742]: I0424 00:19:38.129238 2742 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-lib-modules\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129356 kubelet[2742]: I0424 00:19:38.129245 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5665695f-fb75-4206-9467-1b6af4c145a0-cilium-config-path\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129815 kubelet[2742]: I0424 00:19:38.129253 2742 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cilium-cgroup\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129815 kubelet[2742]: I0424 00:19:38.129260 2742 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-xtables-lock\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.129815 kubelet[2742]: I0424 00:19:38.129268 2742 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc-cni-path\") on node \"172-234-204-89\" DevicePath \"\"" Apr 24 00:19:38.642927 kubelet[2742]: I0424 00:19:38.642882 2742 scope.go:117] "RemoveContainer" containerID="944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775" Apr 24 00:19:38.646254 containerd[1553]: time="2026-04-24T00:19:38.645329164Z" level=info msg="RemoveContainer for \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\"" Apr 24 00:19:38.652307 systemd[1]: Removed slice kubepods-besteffort-pod5665695f_fb75_4206_9467_1b6af4c145a0.slice - libcontainer container kubepods-besteffort-pod5665695f_fb75_4206_9467_1b6af4c145a0.slice. 
Apr 24 00:19:38.653626 containerd[1553]: time="2026-04-24T00:19:38.653446417Z" level=info msg="RemoveContainer for \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" returns successfully" Apr 24 00:19:38.653863 containerd[1553]: time="2026-04-24T00:19:38.653744068Z" level=error msg="ContainerStatus for \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\": not found" Apr 24 00:19:38.653928 kubelet[2742]: I0424 00:19:38.653596 2742 scope.go:117] "RemoveContainer" containerID="944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775" Apr 24 00:19:38.654350 kubelet[2742]: E0424 00:19:38.654317 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\": not found" containerID="944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775" Apr 24 00:19:38.654419 kubelet[2742]: I0424 00:19:38.654348 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775"} err="failed to get container status \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\": rpc error: code = NotFound desc = an error occurred when try to find container \"944c164208d21edce7963072c4786fea359971f317ec7d19e274837af9aab775\": not found" Apr 24 00:19:38.654419 kubelet[2742]: I0424 00:19:38.654379 2742 scope.go:117] "RemoveContainer" containerID="809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1" Apr 24 00:19:38.658128 containerd[1553]: time="2026-04-24T00:19:38.658099391Z" level=info msg="RemoveContainer for \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\"" Apr 24 00:19:38.662823 
systemd[1]: Removed slice kubepods-burstable-pode11a6d9c_31e0_4d72_92a5_d5304f3f5bdc.slice - libcontainer container kubepods-burstable-pode11a6d9c_31e0_4d72_92a5_d5304f3f5bdc.slice. Apr 24 00:19:38.663061 systemd[1]: kubepods-burstable-pode11a6d9c_31e0_4d72_92a5_d5304f3f5bdc.slice: Consumed 7.318s CPU time, 129M memory peak, 112K read from disk, 13.3M written to disk. Apr 24 00:19:38.666384 containerd[1553]: time="2026-04-24T00:19:38.666318314Z" level=info msg="RemoveContainer for \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" returns successfully" Apr 24 00:19:38.666820 kubelet[2742]: I0424 00:19:38.666765 2742 scope.go:117] "RemoveContainer" containerID="13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9" Apr 24 00:19:38.668974 containerd[1553]: time="2026-04-24T00:19:38.668929681Z" level=info msg="RemoveContainer for \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\"" Apr 24 00:19:38.675175 containerd[1553]: time="2026-04-24T00:19:38.675148710Z" level=info msg="RemoveContainer for \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\" returns successfully" Apr 24 00:19:38.675857 kubelet[2742]: I0424 00:19:38.675794 2742 scope.go:117] "RemoveContainer" containerID="a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a" Apr 24 00:19:38.680332 containerd[1553]: time="2026-04-24T00:19:38.679938663Z" level=info msg="RemoveContainer for \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\"" Apr 24 00:19:38.685576 containerd[1553]: time="2026-04-24T00:19:38.685552309Z" level=info msg="RemoveContainer for \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\" returns successfully" Apr 24 00:19:38.685935 kubelet[2742]: I0424 00:19:38.685864 2742 scope.go:117] "RemoveContainer" containerID="b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b" Apr 24 00:19:38.687585 containerd[1553]: time="2026-04-24T00:19:38.687563125Z" level=info msg="RemoveContainer 
for \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\"" Apr 24 00:19:38.691216 containerd[1553]: time="2026-04-24T00:19:38.690601984Z" level=info msg="RemoveContainer for \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\" returns successfully" Apr 24 00:19:38.691297 kubelet[2742]: I0424 00:19:38.690982 2742 scope.go:117] "RemoveContainer" containerID="7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86" Apr 24 00:19:38.696599 containerd[1553]: time="2026-04-24T00:19:38.696569021Z" level=info msg="RemoveContainer for \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\"" Apr 24 00:19:38.702586 containerd[1553]: time="2026-04-24T00:19:38.702559968Z" level=info msg="RemoveContainer for \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\" returns successfully" Apr 24 00:19:38.704148 kubelet[2742]: I0424 00:19:38.704124 2742 scope.go:117] "RemoveContainer" containerID="809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1" Apr 24 00:19:38.704420 containerd[1553]: time="2026-04-24T00:19:38.704383053Z" level=error msg="ContainerStatus for \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\": not found" Apr 24 00:19:38.704613 kubelet[2742]: E0424 00:19:38.704596 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\": not found" containerID="809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1" Apr 24 00:19:38.705021 kubelet[2742]: I0424 00:19:38.704916 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1"} err="failed to 
get container status \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"809e04db7d9c390a83061b1292a2f04a60b67f503f659c1545c10e30befc04d1\": not found" Apr 24 00:19:38.705021 kubelet[2742]: I0424 00:19:38.704939 2742 scope.go:117] "RemoveContainer" containerID="13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9" Apr 24 00:19:38.705139 containerd[1553]: time="2026-04-24T00:19:38.705113366Z" level=error msg="ContainerStatus for \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\": not found" Apr 24 00:19:38.705211 kubelet[2742]: E0424 00:19:38.705196 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\": not found" containerID="13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9" Apr 24 00:19:38.705238 kubelet[2742]: I0424 00:19:38.705214 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9"} err="failed to get container status \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"13fba06fb545cb528f469b0775a1f73365e296693e47727e52798220932334e9\": not found" Apr 24 00:19:38.705238 kubelet[2742]: I0424 00:19:38.705226 2742 scope.go:117] "RemoveContainer" containerID="a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a" Apr 24 00:19:38.705381 containerd[1553]: time="2026-04-24T00:19:38.705356226Z" level=error msg="ContainerStatus for 
\"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\": not found" Apr 24 00:19:38.705486 kubelet[2742]: E0424 00:19:38.705468 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\": not found" containerID="a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a" Apr 24 00:19:38.705568 kubelet[2742]: I0424 00:19:38.705551 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a"} err="failed to get container status \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8191e96deca7e64df827c17bde17173c39a4dd252a29d81ce20f1bf6dc09b5a\": not found" Apr 24 00:19:38.705568 kubelet[2742]: I0424 00:19:38.705567 2742 scope.go:117] "RemoveContainer" containerID="b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b" Apr 24 00:19:38.705909 containerd[1553]: time="2026-04-24T00:19:38.705887677Z" level=error msg="ContainerStatus for \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\": not found" Apr 24 00:19:38.706079 kubelet[2742]: E0424 00:19:38.706062 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\": not found" 
containerID="b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b" Apr 24 00:19:38.706131 kubelet[2742]: I0424 00:19:38.706082 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b"} err="failed to get container status \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b32b0773041f8a20ab3531ac1994553f72f1389be36919527cb56c3b3f2bf77b\": not found" Apr 24 00:19:38.706131 kubelet[2742]: I0424 00:19:38.706118 2742 scope.go:117] "RemoveContainer" containerID="7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86" Apr 24 00:19:38.706324 containerd[1553]: time="2026-04-24T00:19:38.706266069Z" level=error msg="ContainerStatus for \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\": not found" Apr 24 00:19:38.706694 kubelet[2742]: E0424 00:19:38.706459 2742 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\": not found" containerID="7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86" Apr 24 00:19:38.706694 kubelet[2742]: I0424 00:19:38.706679 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86"} err="failed to get container status \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\": rpc error: code = NotFound desc = an error occurred when try to find container \"7026fdafbd5b0dfca0008d65526be92bd6b02e0b41e77556a804d0906f69ec86\": not found" Apr 24 
00:19:38.801185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84-shm.mount: Deactivated successfully. Apr 24 00:19:38.801659 systemd[1]: var-lib-kubelet-pods-5665695f\x2dfb75\x2d4206\x2d9467\x2d1b6af4c145a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsm8q7.mount: Deactivated successfully. Apr 24 00:19:38.801934 systemd[1]: var-lib-kubelet-pods-e11a6d9c\x2d31e0\x2d4d72\x2d92a5\x2dd5304f3f5bdc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv592q.mount: Deactivated successfully. Apr 24 00:19:38.802012 systemd[1]: var-lib-kubelet-pods-e11a6d9c\x2d31e0\x2d4d72\x2d92a5\x2dd5304f3f5bdc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 24 00:19:38.802089 systemd[1]: var-lib-kubelet-pods-e11a6d9c\x2d31e0\x2d4d72\x2d92a5\x2dd5304f3f5bdc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 24 00:19:39.208158 kubelet[2742]: E0424 00:19:39.208109 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:19:39.211296 kubelet[2742]: I0424 00:19:39.210783 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5665695f-fb75-4206-9467-1b6af4c145a0" path="/var/lib/kubelet/pods/5665695f-fb75-4206-9467-1b6af4c145a0/volumes" Apr 24 00:19:39.211726 kubelet[2742]: I0424 00:19:39.211703 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc" path="/var/lib/kubelet/pods/e11a6d9c-31e0-4d72-92a5-d5304f3f5bdc/volumes" Apr 24 00:19:39.737903 sshd[4281]: Connection closed by 20.229.252.112 port 49312 Apr 24 00:19:39.739606 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Apr 24 00:19:39.744070 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit. 
Apr 24 00:19:39.744900 systemd[1]: sshd@22-172.234.204.89:22-20.229.252.112:49312.service: Deactivated successfully. Apr 24 00:19:39.747677 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 00:19:39.749108 systemd-logind[1532]: Removed session 21. Apr 24 00:19:39.846842 systemd[1]: Started sshd@23-172.234.204.89:22-20.229.252.112:46112.service - OpenSSH per-connection server daemon (20.229.252.112:46112). Apr 24 00:19:40.311469 kubelet[2742]: E0424 00:19:40.311417 2742 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 24 00:19:40.369215 sshd[4432]: Accepted publickey for core from 20.229.252.112 port 46112 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:19:40.371012 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:19:40.376220 systemd-logind[1532]: New session 22 of user core. Apr 24 00:19:40.387476 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 24 00:19:40.946891 systemd[1]: Created slice kubepods-burstable-pod06a720f5_bac0_4174_b2ff_bef09893a355.slice - libcontainer container kubepods-burstable-pod06a720f5_bac0_4174_b2ff_bef09893a355.slice. Apr 24 00:19:41.025722 sshd[4435]: Connection closed by 20.229.252.112 port 46112 Apr 24 00:19:41.026521 sshd-session[4432]: pam_unix(sshd:session): session closed for user core Apr 24 00:19:41.031987 systemd[1]: sshd@23-172.234.204.89:22-20.229.252.112:46112.service: Deactivated successfully. Apr 24 00:19:41.034945 systemd[1]: session-22.scope: Deactivated successfully. Apr 24 00:19:41.036006 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit. Apr 24 00:19:41.037994 systemd-logind[1532]: Removed session 22. 
Apr 24 00:19:41.046141 kubelet[2742]: I0424 00:19:41.046103 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-cilium-run\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046141 kubelet[2742]: I0424 00:19:41.046139 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-etc-cni-netd\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046256 kubelet[2742]: I0424 00:19:41.046157 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-hostproc\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046256 kubelet[2742]: I0424 00:19:41.046171 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-cni-path\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046256 kubelet[2742]: I0424 00:19:41.046185 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/06a720f5-bac0-4174-b2ff-bef09893a355-cilium-ipsec-secrets\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046256 kubelet[2742]: I0424 00:19:41.046198 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-host-proc-sys-net\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046256 kubelet[2742]: I0424 00:19:41.046210 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-host-proc-sys-kernel\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046256 kubelet[2742]: I0424 00:19:41.046225 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-xtables-lock\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046446 kubelet[2742]: I0424 00:19:41.046237 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06a720f5-bac0-4174-b2ff-bef09893a355-cilium-config-path\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046446 kubelet[2742]: I0424 00:19:41.046251 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06a720f5-bac0-4174-b2ff-bef09893a355-clustermesh-secrets\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046446 kubelet[2742]: I0424 00:19:41.046264 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82h8w\" (UniqueName: 
\"kubernetes.io/projected/06a720f5-bac0-4174-b2ff-bef09893a355-kube-api-access-82h8w\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046446 kubelet[2742]: I0424 00:19:41.046292 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-bpf-maps\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046446 kubelet[2742]: I0424 00:19:41.046304 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-cilium-cgroup\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046446 kubelet[2742]: I0424 00:19:41.046315 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06a720f5-bac0-4174-b2ff-bef09893a355-lib-modules\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.046587 kubelet[2742]: I0424 00:19:41.046327 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06a720f5-bac0-4174-b2ff-bef09893a355-hubble-tls\") pod \"cilium-n87gj\" (UID: \"06a720f5-bac0-4174-b2ff-bef09893a355\") " pod="kube-system/cilium-n87gj" Apr 24 00:19:41.139757 systemd[1]: Started sshd@24-172.234.204.89:22-20.229.252.112:46120.service - OpenSSH per-connection server daemon (20.229.252.112:46120). 
Apr 24 00:19:41.251962 kubelet[2742]: E0424 00:19:41.251838 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Apr 24 00:19:41.254715 containerd[1553]: time="2026-04-24T00:19:41.253070512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n87gj,Uid:06a720f5-bac0-4174-b2ff-bef09893a355,Namespace:kube-system,Attempt:0,}" Apr 24 00:19:41.271697 containerd[1553]: time="2026-04-24T00:19:41.271631414Z" level=info msg="connecting to shim 26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8" address="unix:///run/containerd/s/92bb7c016007c06a363a39e5c8177268894f80e2213a8dc251f43f26823e5206" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:19:41.298462 systemd[1]: Started cri-containerd-26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8.scope - libcontainer container 26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8. 
Apr 24 00:19:41.332053 containerd[1553]: time="2026-04-24T00:19:41.332017386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n87gj,Uid:06a720f5-bac0-4174-b2ff-bef09893a355,Namespace:kube-system,Attempt:0,} returns sandbox id \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\""
Apr 24 00:19:41.333045 kubelet[2742]: E0424 00:19:41.333025 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:41.340391 containerd[1553]: time="2026-04-24T00:19:41.340260459Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 24 00:19:41.346037 containerd[1553]: time="2026-04-24T00:19:41.345929315Z" level=info msg="Container e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:19:41.349653 containerd[1553]: time="2026-04-24T00:19:41.349630785Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89\""
Apr 24 00:19:41.351181 containerd[1553]: time="2026-04-24T00:19:41.351144660Z" level=info msg="StartContainer for \"e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89\""
Apr 24 00:19:41.354319 containerd[1553]: time="2026-04-24T00:19:41.354267908Z" level=info msg="connecting to shim e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89" address="unix:///run/containerd/s/92bb7c016007c06a363a39e5c8177268894f80e2213a8dc251f43f26823e5206" protocol=ttrpc version=3
Apr 24 00:19:41.372428 systemd[1]: Started cri-containerd-e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89.scope - libcontainer container e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89.
Apr 24 00:19:41.408134 containerd[1553]: time="2026-04-24T00:19:41.408053211Z" level=info msg="StartContainer for \"e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89\" returns successfully"
Apr 24 00:19:41.419154 systemd[1]: cri-containerd-e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89.scope: Deactivated successfully.
Apr 24 00:19:41.423069 containerd[1553]: time="2026-04-24T00:19:41.422974793Z" level=info msg="received container exit event container_id:\"e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89\" id:\"e9cc463b7e11d5182bfee21cd098fa544f799be15e81c4e1f04401c51df49f89\" pid:4512 exited_at:{seconds:1776989981 nanos:422760963}"
Apr 24 00:19:41.666056 kubelet[2742]: E0424 00:19:41.665257 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:41.673303 containerd[1553]: time="2026-04-24T00:19:41.671092105Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 24 00:19:41.682451 containerd[1553]: time="2026-04-24T00:19:41.682420987Z" level=info msg="Container 3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:19:41.690987 containerd[1553]: time="2026-04-24T00:19:41.690936071Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d\""
Apr 24 00:19:41.691974 containerd[1553]: time="2026-04-24T00:19:41.691933874Z" level=info msg="StartContainer for \"3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d\""
Apr 24 00:19:41.694297 containerd[1553]: time="2026-04-24T00:19:41.694246650Z" level=info msg="connecting to shim 3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d" address="unix:///run/containerd/s/92bb7c016007c06a363a39e5c8177268894f80e2213a8dc251f43f26823e5206" protocol=ttrpc version=3
Apr 24 00:19:41.716443 systemd[1]: Started cri-containerd-3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d.scope - libcontainer container 3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d.
Apr 24 00:19:41.722014 sshd[4446]: Accepted publickey for core from 20.229.252.112 port 46120 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:41.724885 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:41.733963 systemd-logind[1532]: New session 23 of user core.
Apr 24 00:19:41.738420 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 24 00:19:41.770945 containerd[1553]: time="2026-04-24T00:19:41.770893478Z" level=info msg="StartContainer for \"3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d\" returns successfully"
Apr 24 00:19:41.780614 systemd[1]: cri-containerd-3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d.scope: Deactivated successfully.
Apr 24 00:19:41.783145 containerd[1553]: time="2026-04-24T00:19:41.783103632Z" level=info msg="received container exit event container_id:\"3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d\" id:\"3c54b750a75b0a8d0688c60d81c8cce0f60969946fc0a9c8614518907347f51d\" pid:4560 exited_at:{seconds:1776989981 nanos:782882052}"
Apr 24 00:19:42.029455 sshd[4566]: Connection closed by 20.229.252.112 port 46120
Apr 24 00:19:42.030187 sshd-session[4446]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:42.035555 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit.
Apr 24 00:19:42.035632 systemd[1]: sshd@24-172.234.204.89:22-20.229.252.112:46120.service: Deactivated successfully.
Apr 24 00:19:42.037766 systemd[1]: session-23.scope: Deactivated successfully.
Apr 24 00:19:42.040126 systemd-logind[1532]: Removed session 23.
Apr 24 00:19:42.138574 systemd[1]: Started sshd@25-172.234.204.89:22-20.229.252.112:46132.service - OpenSSH per-connection server daemon (20.229.252.112:46132).
Apr 24 00:19:42.659709 sshd[4599]: Accepted publickey for core from 20.229.252.112 port 46132 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:19:42.661918 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:19:42.668453 systemd-logind[1532]: New session 24 of user core.
Apr 24 00:19:42.672903 kubelet[2742]: E0424 00:19:42.672861 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:42.674560 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 24 00:19:42.684982 containerd[1553]: time="2026-04-24T00:19:42.684658324Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 24 00:19:42.697308 containerd[1553]: time="2026-04-24T00:19:42.697207250Z" level=info msg="Container b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:19:42.709175 containerd[1553]: time="2026-04-24T00:19:42.706883297Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7\""
Apr 24 00:19:42.710595 containerd[1553]: time="2026-04-24T00:19:42.710458657Z" level=info msg="StartContainer for \"b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7\""
Apr 24 00:19:42.711920 containerd[1553]: time="2026-04-24T00:19:42.711899221Z" level=info msg="connecting to shim b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7" address="unix:///run/containerd/s/92bb7c016007c06a363a39e5c8177268894f80e2213a8dc251f43f26823e5206" protocol=ttrpc version=3
Apr 24 00:19:42.752821 systemd[1]: Started cri-containerd-b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7.scope - libcontainer container b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7.
Apr 24 00:19:42.863417 containerd[1553]: time="2026-04-24T00:19:42.863381828Z" level=info msg="StartContainer for \"b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7\" returns successfully"
Apr 24 00:19:42.870832 systemd[1]: cri-containerd-b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7.scope: Deactivated successfully.
Apr 24 00:19:42.876100 containerd[1553]: time="2026-04-24T00:19:42.875219552Z" level=info msg="received container exit event container_id:\"b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7\" id:\"b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7\" pid:4616 exited_at:{seconds:1776989982 nanos:874716580}"
Apr 24 00:19:42.921900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3a2676a0d834c0a679e77574ef8a7df491b971d96ab6c2a1d6d34182c49e6c7-rootfs.mount: Deactivated successfully.
Apr 24 00:19:43.679188 kubelet[2742]: E0424 00:19:43.679154 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:43.688297 containerd[1553]: time="2026-04-24T00:19:43.686591029Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 24 00:19:43.698380 containerd[1553]: time="2026-04-24T00:19:43.698344892Z" level=info msg="Container 81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:19:43.708977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3901720406.mount: Deactivated successfully.
Apr 24 00:19:43.714878 containerd[1553]: time="2026-04-24T00:19:43.714842208Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313\""
Apr 24 00:19:43.716305 containerd[1553]: time="2026-04-24T00:19:43.716168162Z" level=info msg="StartContainer for \"81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313\""
Apr 24 00:19:43.717830 containerd[1553]: time="2026-04-24T00:19:43.717701286Z" level=info msg="connecting to shim 81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313" address="unix:///run/containerd/s/92bb7c016007c06a363a39e5c8177268894f80e2213a8dc251f43f26823e5206" protocol=ttrpc version=3
Apr 24 00:19:43.743433 systemd[1]: Started cri-containerd-81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313.scope - libcontainer container 81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313.
Apr 24 00:19:43.777083 systemd[1]: cri-containerd-81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313.scope: Deactivated successfully.
Apr 24 00:19:43.778467 containerd[1553]: time="2026-04-24T00:19:43.778335117Z" level=info msg="received container exit event container_id:\"81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313\" id:\"81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313\" pid:4660 exited_at:{seconds:1776989983 nanos:778101885}"
Apr 24 00:19:43.779741 containerd[1553]: time="2026-04-24T00:19:43.779719580Z" level=info msg="StartContainer for \"81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313\" returns successfully"
Apr 24 00:19:43.803195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81fa2ad06a03605356cef8af7b625651639850de38f68d22f1627f53b931e313-rootfs.mount: Deactivated successfully.
Apr 24 00:19:44.687023 kubelet[2742]: E0424 00:19:44.686986 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:44.695307 containerd[1553]: time="2026-04-24T00:19:44.693258485Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 24 00:19:44.714240 containerd[1553]: time="2026-04-24T00:19:44.714204433Z" level=info msg="Container 3c01308dba1c191020a5778c8c88a2b47b91cdd739c45eaaec267871a5593cd5: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:19:44.720770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount802201556.mount: Deactivated successfully.
Apr 24 00:19:44.726743 containerd[1553]: time="2026-04-24T00:19:44.726697499Z" level=info msg="CreateContainer within sandbox \"26ea9e760ce60e3623e116be593db82c8a9297995ad1cf6101d0529dbeaf33b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3c01308dba1c191020a5778c8c88a2b47b91cdd739c45eaaec267871a5593cd5\""
Apr 24 00:19:44.727456 containerd[1553]: time="2026-04-24T00:19:44.727427150Z" level=info msg="StartContainer for \"3c01308dba1c191020a5778c8c88a2b47b91cdd739c45eaaec267871a5593cd5\""
Apr 24 00:19:44.728386 containerd[1553]: time="2026-04-24T00:19:44.728365483Z" level=info msg="connecting to shim 3c01308dba1c191020a5778c8c88a2b47b91cdd739c45eaaec267871a5593cd5" address="unix:///run/containerd/s/92bb7c016007c06a363a39e5c8177268894f80e2213a8dc251f43f26823e5206" protocol=ttrpc version=3
Apr 24 00:19:44.751410 systemd[1]: Started cri-containerd-3c01308dba1c191020a5778c8c88a2b47b91cdd739c45eaaec267871a5593cd5.scope - libcontainer container 3c01308dba1c191020a5778c8c88a2b47b91cdd739c45eaaec267871a5593cd5.
Apr 24 00:19:44.793835 containerd[1553]: time="2026-04-24T00:19:44.793793306Z" level=info msg="StartContainer for \"3c01308dba1c191020a5778c8c88a2b47b91cdd739c45eaaec267871a5593cd5\" returns successfully"
Apr 24 00:19:45.224512 containerd[1553]: time="2026-04-24T00:19:45.224251526Z" level=info msg="StopPodSandbox for \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\""
Apr 24 00:19:45.224815 containerd[1553]: time="2026-04-24T00:19:45.224798748Z" level=info msg="TearDown network for sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" successfully"
Apr 24 00:19:45.224935 containerd[1553]: time="2026-04-24T00:19:45.224892558Z" level=info msg="StopPodSandbox for \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" returns successfully"
Apr 24 00:19:45.225468 containerd[1553]: time="2026-04-24T00:19:45.225437559Z" level=info msg="RemovePodSandbox for \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\""
Apr 24 00:19:45.225608 containerd[1553]: time="2026-04-24T00:19:45.225541019Z" level=info msg="Forcibly stopping sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\""
Apr 24 00:19:45.225691 containerd[1553]: time="2026-04-24T00:19:45.225677080Z" level=info msg="TearDown network for sandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" successfully"
Apr 24 00:19:45.227420 containerd[1553]: time="2026-04-24T00:19:45.227404154Z" level=info msg="Ensure that sandbox 5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84 in task-service has been cleanup successfully"
Apr 24 00:19:45.230046 containerd[1553]: time="2026-04-24T00:19:45.230025702Z" level=info msg="RemovePodSandbox \"5c06128e0e230efc482cca93af860494c3a6e9a897ee0eef15f7b180ebaa5b84\" returns successfully"
Apr 24 00:19:45.230736 containerd[1553]: time="2026-04-24T00:19:45.230691534Z" level=info msg="StopPodSandbox for \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\""
Apr 24 00:19:45.231037 containerd[1553]: time="2026-04-24T00:19:45.230957665Z" level=info msg="TearDown network for sandbox \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" successfully"
Apr 24 00:19:45.231133 containerd[1553]: time="2026-04-24T00:19:45.230977615Z" level=info msg="StopPodSandbox for \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" returns successfully"
Apr 24 00:19:45.231621 containerd[1553]: time="2026-04-24T00:19:45.231605997Z" level=info msg="RemovePodSandbox for \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\""
Apr 24 00:19:45.231770 containerd[1553]: time="2026-04-24T00:19:45.231673317Z" level=info msg="Forcibly stopping sandbox \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\""
Apr 24 00:19:45.231854 containerd[1553]: time="2026-04-24T00:19:45.231838507Z" level=info msg="TearDown network for sandbox \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" successfully"
Apr 24 00:19:45.233504 containerd[1553]: time="2026-04-24T00:19:45.233463631Z" level=info msg="Ensure that sandbox 1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c in task-service has been cleanup successfully"
Apr 24 00:19:45.235271 containerd[1553]: time="2026-04-24T00:19:45.235218226Z" level=info msg="RemovePodSandbox \"1749da598afb3598633bb83356b9bdb20d815a7d1d9d51243b8ea70a6264d46c\" returns successfully"
Apr 24 00:19:45.280351 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Apr 24 00:19:45.692381 kubelet[2742]: E0424 00:19:45.692341 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:45.726480 kubelet[2742]: I0424 00:19:45.725911 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n87gj" podStartSLOduration=5.7258945820000005 podStartE2EDuration="5.725894582s" podCreationTimestamp="2026-04-24 00:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:19:45.725873412 +0000 UTC m=+180.624976869" watchObservedRunningTime="2026-04-24 00:19:45.725894582 +0000 UTC m=+180.624998039"
Apr 24 00:19:47.251324 kubelet[2742]: E0424 00:19:47.251269 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:48.245687 systemd-networkd[1439]: lxc_health: Link UP
Apr 24 00:19:48.249632 systemd-networkd[1439]: lxc_health: Gained carrier
Apr 24 00:19:49.265944 kubelet[2742]: E0424 00:19:49.265084 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:49.432475 systemd-networkd[1439]: lxc_health: Gained IPv6LL
Apr 24 00:19:49.701612 kubelet[2742]: E0424 00:19:49.701435 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:50.704327 kubelet[2742]: E0424 00:19:50.703303 2742 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Apr 24 00:19:53.805561 sshd-session[4599]: pam_unix(sshd:session): session closed for user core
Apr 24 00:19:53.811009 systemd[1]: sshd@25-172.234.204.89:22-20.229.252.112:46132.service: Deactivated successfully.
Apr 24 00:19:53.811323 sshd[4602]: Connection closed by 20.229.252.112 port 46132
Apr 24 00:19:53.814950 systemd[1]: session-24.scope: Deactivated successfully.
Apr 24 00:19:53.820175 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit.
Apr 24 00:19:53.824021 systemd-logind[1532]: Removed session 24.