Nov 4 05:00:50.215185 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 03:00:51 -00 2025
Nov 4 05:00:50.215212 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 05:00:50.215222 kernel: BIOS-provided physical RAM map:
Nov 4 05:00:50.215896 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 4 05:00:50.215904 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 4 05:00:50.215911 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 05:00:50.215922 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 4 05:00:50.215929 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 4 05:00:50.215935 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 4 05:00:50.215942 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 4 05:00:50.215948 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 05:00:50.215955 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 05:00:50.215961 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 4 05:00:50.215968 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 05:00:50.215979 kernel: NX (Execute Disable) protection: active
Nov 4 05:00:50.215986 kernel: APIC: Static calls initialized
Nov 4 05:00:50.215993 kernel: SMBIOS 2.8 present.
Nov 4 05:00:50.216000 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 4 05:00:50.216007 kernel: DMI: Memory slots populated: 1/1
Nov 4 05:00:50.216017 kernel: Hypervisor detected: KVM
Nov 4 05:00:50.216024 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 4 05:00:50.216031 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 05:00:50.216038 kernel: kvm-clock: using sched offset of 6206214960 cycles
Nov 4 05:00:50.216046 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 05:00:50.216053 kernel: tsc: Detected 2000.000 MHz processor
Nov 4 05:00:50.216061 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 05:00:50.216069 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 05:00:50.216079 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 4 05:00:50.216086 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 05:00:50.216094 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 05:00:50.216101 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 4 05:00:50.216108 kernel: Using GB pages for direct mapping
Nov 4 05:00:50.216116 kernel: ACPI: Early table checksum verification disabled
Nov 4 05:00:50.216123 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 4 05:00:50.216130 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:00:50.216139 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:00:50.216147 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:00:50.216154 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 4 05:00:50.216162 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:00:50.216169 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:00:50.216180 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:00:50.216190 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:00:50.216198 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 4 05:00:50.216206 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 4 05:00:50.216213 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 4 05:00:50.216221 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 4 05:00:50.216259 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 4 05:00:50.216268 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 4 05:00:50.216276 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 4 05:00:50.216284 kernel: No NUMA configuration found
Nov 4 05:00:50.216292 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 4 05:00:50.216299 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Nov 4 05:00:50.216307 kernel: Zone ranges:
Nov 4 05:00:50.216318 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 05:00:50.216325 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 4 05:00:50.216333 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 4 05:00:50.216340 kernel: Device empty
Nov 4 05:00:50.216348 kernel: Movable zone start for each node
Nov 4 05:00:50.216356 kernel: Early memory node ranges
Nov 4 05:00:50.216363 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 05:00:50.216371 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 4 05:00:50.216381 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 4 05:00:50.216389 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 4 05:00:50.216396 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 05:00:50.216404 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 05:00:50.216411 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 4 05:00:50.216419 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 05:00:50.216426 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 05:00:50.216437 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 05:00:50.216444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 05:00:50.216452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 05:00:50.216459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 05:00:50.216467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 05:00:50.216475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 05:00:50.216482 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 05:00:50.216492 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 05:00:50.216499 kernel: TSC deadline timer available
Nov 4 05:00:50.216507 kernel: CPU topo: Max. logical packages: 1
Nov 4 05:00:50.216514 kernel: CPU topo: Max. logical dies: 1
Nov 4 05:00:50.216522 kernel: CPU topo: Max. dies per package: 1
Nov 4 05:00:50.216530 kernel: CPU topo: Max. threads per core: 1
Nov 4 05:00:50.216537 kernel: CPU topo: Num. cores per package: 2
Nov 4 05:00:50.216545 kernel: CPU topo: Num. threads per package: 2
Nov 4 05:00:50.216555 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 4 05:00:50.216562 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 05:00:50.216570 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 05:00:50.216577 kernel: kvm-guest: setup PV sched yield
Nov 4 05:00:50.216585 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 4 05:00:50.216592 kernel: Booting paravirtualized kernel on KVM
Nov 4 05:00:50.216600 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 05:00:50.216610 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 4 05:00:50.216618 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 4 05:00:50.216625 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 4 05:00:50.216633 kernel: pcpu-alloc: [0] 0 1
Nov 4 05:00:50.216640 kernel: kvm-guest: PV spinlocks enabled
Nov 4 05:00:50.216648 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 05:00:50.216657 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 05:00:50.216667 kernel: random: crng init done
Nov 4 05:00:50.216675 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 05:00:50.216683 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 05:00:50.216690 kernel: Fallback order for Node 0: 0
Nov 4 05:00:50.216698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Nov 4 05:00:50.216705 kernel: Policy zone: Normal
Nov 4 05:00:50.216713 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 05:00:50.216723 kernel: software IO TLB: area num 2.
Nov 4 05:00:50.216731 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 05:00:50.216738 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 05:00:50.216746 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 05:00:50.216753 kernel: Dynamic Preempt: voluntary
Nov 4 05:00:50.216761 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 05:00:50.216769 kernel: rcu: RCU event tracing is enabled.
Nov 4 05:00:50.216779 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 05:00:50.216787 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 05:00:50.216795 kernel: Rude variant of Tasks RCU enabled.
Nov 4 05:00:50.216802 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 05:00:50.216809 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 05:00:50.216817 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 05:00:50.216825 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 05:00:50.216842 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 05:00:50.216850 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 05:00:50.216903 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 4 05:00:50.216956 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 05:00:50.217006 kernel: Console: colour VGA+ 80x25
Nov 4 05:00:50.217018 kernel: printk: legacy console [tty0] enabled
Nov 4 05:00:50.217026 kernel: printk: legacy console [ttyS0] enabled
Nov 4 05:00:50.217035 kernel: ACPI: Core revision 20240827
Nov 4 05:00:50.217048 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 05:00:50.217056 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 05:00:50.217064 kernel: x2apic enabled
Nov 4 05:00:50.217072 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 05:00:50.217081 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 05:00:50.217089 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 05:00:50.217099 kernel: kvm-guest: setup PV IPIs
Nov 4 05:00:50.217107 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 05:00:50.217116 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 4 05:00:50.217124 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Nov 4 05:00:50.217132 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 05:00:50.217140 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 05:00:50.217148 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 05:00:50.217159 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 05:00:50.217167 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 05:00:50.217175 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 05:00:50.217183 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 4 05:00:50.217191 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 05:00:50.217199 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 05:00:50.217207 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 05:00:50.217218 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 05:00:50.218756 kernel: active return thunk: srso_alias_return_thunk
Nov 4 05:00:50.218768 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 05:00:50.218777 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 4 05:00:50.218786 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 4 05:00:50.218794 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 05:00:50.218803 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 05:00:50.218815 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 05:00:50.218823 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 4 05:00:50.218831 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 05:00:50.218839 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 4 05:00:50.218847 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 4 05:00:50.218855 kernel: Freeing SMP alternatives memory: 32K
Nov 4 05:00:50.218863 kernel: pid_max: default: 32768 minimum: 301
Nov 4 05:00:50.218874 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 05:00:50.218882 kernel: landlock: Up and running.
Nov 4 05:00:50.218890 kernel: SELinux: Initializing.
Nov 4 05:00:50.218898 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 05:00:50.218906 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 05:00:50.218914 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 4 05:00:50.218922 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 05:00:50.218932 kernel: ... version: 0
Nov 4 05:00:50.218940 kernel: ... bit width: 48
Nov 4 05:00:50.218948 kernel: ... generic registers: 6
Nov 4 05:00:50.218956 kernel: ... value mask: 0000ffffffffffff
Nov 4 05:00:50.218965 kernel: ... max period: 00007fffffffffff
Nov 4 05:00:50.218972 kernel: ... fixed-purpose events: 0
Nov 4 05:00:50.218980 kernel: ... event mask: 000000000000003f
Nov 4 05:00:50.218991 kernel: signal: max sigframe size: 3376
Nov 4 05:00:50.218999 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 05:00:50.219007 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 05:00:50.219015 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 05:00:50.219023 kernel: smp: Bringing up secondary CPUs ...
Nov 4 05:00:50.219032 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 05:00:50.219039 kernel: .... node #0, CPUs: #1
Nov 4 05:00:50.219050 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 05:00:50.219058 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Nov 4 05:00:50.219066 kernel: Memory: 3979480K/4193772K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15360K init, 2684K bss, 208864K reserved, 0K cma-reserved)
Nov 4 05:00:50.219074 kernel: devtmpfs: initialized
Nov 4 05:00:50.219082 kernel: x86/mm: Memory block size: 128MB
Nov 4 05:00:50.219090 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 05:00:50.219098 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 05:00:50.219109 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 05:00:50.219117 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 05:00:50.219125 kernel: audit: initializing netlink subsys (disabled)
Nov 4 05:00:50.219133 kernel: audit: type=2000 audit(1762232446.503:1): state=initialized audit_enabled=0 res=1
Nov 4 05:00:50.219141 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 05:00:50.219149 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 05:00:50.219157 kernel: cpuidle: using governor menu
Nov 4 05:00:50.219167 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 05:00:50.219175 kernel: dca service started, version 1.12.1
Nov 4 05:00:50.219183 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 4 05:00:50.219192 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 4 05:00:50.219200 kernel: PCI: Using configuration type 1 for base access
Nov 4 05:00:50.219208 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 05:00:50.219216 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 05:00:50.219253 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 05:00:50.219264 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 05:00:50.219272 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 05:00:50.219280 kernel: ACPI: Added _OSI(Module Device)
Nov 4 05:00:50.219288 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 05:00:50.219296 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 05:00:50.219304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 05:00:50.219312 kernel: ACPI: Interpreter enabled
Nov 4 05:00:50.219324 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 05:00:50.219332 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 05:00:50.219341 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 05:00:50.219349 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 05:00:50.219357 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 05:00:50.219365 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 05:00:50.219625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 05:00:50.219822 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 05:00:50.220009 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 05:00:50.220019 kernel: PCI host bridge to bus 0000:00
Nov 4 05:00:50.220202 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 05:00:50.220571 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 05:00:50.220754 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 05:00:50.220955 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 4 05:00:50.221131 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 05:00:50.221931 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 4 05:00:50.222107 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 05:00:50.222365 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 05:00:50.222573 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 05:00:50.222755 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 4 05:00:50.222965 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 4 05:00:50.223155 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 4 05:00:50.223399 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 05:00:50.223605 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 4 05:00:50.223784 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Nov 4 05:00:50.223960 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 4 05:00:50.224136 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 4 05:00:50.224393 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 05:00:50.224581 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 4 05:00:50.224766 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 4 05:00:50.224942 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 4 05:00:50.225514 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 4 05:00:50.228079 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 05:00:50.228310 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 05:00:50.228514 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 05:00:50.228692 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Nov 4 05:00:50.228866 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Nov 4 05:00:50.229048 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 05:00:50.229244 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 4 05:00:50.229261 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 05:00:50.229276 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 05:00:50.229285 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 05:00:50.229293 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 05:00:50.229301 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 05:00:50.229310 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 05:00:50.229318 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 05:00:50.229326 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 05:00:50.229337 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 05:00:50.229346 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 05:00:50.229354 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 05:00:50.229363 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 05:00:50.229371 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 05:00:50.229380 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 05:00:50.229388 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 05:00:50.229399 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 05:00:50.229407 kernel: iommu: Default domain type: Translated
Nov 4 05:00:50.229416 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 05:00:50.229424 kernel: PCI: Using ACPI for IRQ routing
Nov 4 05:00:50.229432 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 05:00:50.229441 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 4 05:00:50.229449 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 4 05:00:50.229642 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 05:00:50.229819 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 05:00:50.229991 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 05:00:50.230002 kernel: vgaarb: loaded
Nov 4 05:00:50.230010 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 05:00:50.230019 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 05:00:50.230027 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 05:00:50.230039 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 05:00:50.230047 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 05:00:50.230055 kernel: pnp: PnP ACPI init
Nov 4 05:00:50.231118 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 4 05:00:50.231135 kernel: pnp: PnP ACPI: found 5 devices
Nov 4 05:00:50.231145 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 05:00:50.231158 kernel: NET: Registered PF_INET protocol family
Nov 4 05:00:50.231166 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 05:00:50.231175 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 05:00:50.231183 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 05:00:50.231192 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 05:00:50.231200 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 05:00:50.231209 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 05:00:50.231220 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 05:00:50.231264 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 05:00:50.231274 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 05:00:50.231283 kernel: NET: Registered PF_XDP protocol family
Nov 4 05:00:50.231465 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 05:00:50.231632 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 05:00:50.236262 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 05:00:50.236452 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 4 05:00:50.236617 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 4 05:00:50.236780 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 4 05:00:50.236792 kernel: PCI: CLS 0 bytes, default 64
Nov 4 05:00:50.236801 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 4 05:00:50.236810 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 4 05:00:50.236819 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 4 05:00:50.236831 kernel: Initialise system trusted keyrings
Nov 4 05:00:50.236840 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 05:00:50.236849 kernel: Key type asymmetric registered
Nov 4 05:00:50.236858 kernel: Asymmetric key parser 'x509' registered
Nov 4 05:00:50.236867 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 05:00:50.236876 kernel: io scheduler mq-deadline registered
Nov 4 05:00:50.236884 kernel: io scheduler kyber registered
Nov 4 05:00:50.236895 kernel: io scheduler bfq registered
Nov 4 05:00:50.236904 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 05:00:50.236913 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 05:00:50.236922 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 05:00:50.236950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 05:00:50.236962 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 05:00:50.236971 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 05:00:50.236984 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 05:00:50.236993 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 05:00:50.237002 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 05:00:50.237202 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 4 05:00:50.237440 kernel: rtc_cmos 00:03: registered as rtc0
Nov 4 05:00:50.237618 kernel: rtc_cmos 00:03: setting system clock to 2025-11-04T05:00:48 UTC (1762232448)
Nov 4 05:00:50.237794 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 4 05:00:50.237806 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 05:00:50.237815 kernel: NET: Registered PF_INET6 protocol family
Nov 4 05:00:50.237824 kernel: Segment Routing with IPv6
Nov 4 05:00:50.237833 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 05:00:50.237841 kernel: NET: Registered PF_PACKET protocol family
Nov 4 05:00:50.237850 kernel: Key type dns_resolver registered
Nov 4 05:00:50.237861 kernel: IPI shorthand broadcast: enabled
Nov 4 05:00:50.237870 kernel: sched_clock: Marking stable (1755010260, 358539000)->(2261649260, -148100000)
Nov 4 05:00:50.237879 kernel: registered taskstats version 1
Nov 4 05:00:50.237887 kernel: Loading compiled-in X.509 certificates
Nov 4 05:00:50.237896 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: dafbe857b8ef9eaad4381fdddb57853ce023547e'
Nov 4 05:00:50.237905 kernel: Demotion targets for Node 0: null
Nov 4 05:00:50.237913 kernel: Key type .fscrypt registered
Nov 4 05:00:50.237921 kernel: Key type fscrypt-provisioning registered
Nov 4 05:00:50.237932 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 05:00:50.237940 kernel: ima: Allocated hash algorithm: sha1
Nov 4 05:00:50.237948 kernel: ima: No architecture policies found
Nov 4 05:00:50.237957 kernel: clk: Disabling unused clocks
Nov 4 05:00:50.237965 kernel: Freeing unused kernel image (initmem) memory: 15360K
Nov 4 05:00:50.237973 kernel: Write protecting the kernel read-only data: 45056k
Nov 4 05:00:50.237983 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 4 05:00:50.237992 kernel: Run /init as init process
Nov 4 05:00:50.238000 kernel: with arguments:
Nov 4 05:00:50.238009 kernel: /init
Nov 4 05:00:50.238017 kernel: with environment:
Nov 4 05:00:50.238025 kernel: HOME=/
Nov 4 05:00:50.238048 kernel: TERM=linux
Nov 4 05:00:50.238060 kernel: SCSI subsystem initialized
Nov 4 05:00:50.238070 kernel: libata version 3.00 loaded.
Nov 4 05:00:50.238327 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 05:00:50.238344 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 05:00:50.238529 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 05:00:50.238711 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 05:00:50.238893 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 05:00:50.239319 kernel: scsi host0: ahci
Nov 4 05:00:50.239529 kernel: scsi host1: ahci
Nov 4 05:00:50.239726 kernel: scsi host2: ahci
Nov 4 05:00:50.239917 kernel: scsi host3: ahci
Nov 4 05:00:50.240106 kernel: scsi host4: ahci
Nov 4 05:00:50.240385 kernel: scsi host5: ahci
Nov 4 05:00:50.240403 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Nov 4 05:00:50.240412 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Nov 4 05:00:50.240422 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Nov 4 05:00:50.240431 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Nov 4 05:00:50.240439 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Nov 4 05:00:50.240448 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Nov 4 05:00:50.240462 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 05:00:50.240473 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 4 05:00:50.240482 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 05:00:50.240491 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 05:00:50.240499 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 05:00:50.240508 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 05:00:50.240705 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Nov 4 05:00:50.240900 kernel: scsi host6: Virtio SCSI HBA
Nov 4 05:00:50.241111 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 4 05:00:50.241406 kernel: sd 6:0:0:0: Power-on or device reset occurred
Nov 4 05:00:50.241618 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 4 05:00:50.241814 kernel: sd 6:0:0:0: [sda] Write Protect is off
Nov 4 05:00:50.242013 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 4 05:00:50.242208 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 4 05:00:50.242220 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 05:00:50.242255 kernel: GPT:25804799 != 167739391
Nov 4 05:00:50.242267 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 05:00:50.242277 kernel: GPT:25804799 != 167739391
Nov 4 05:00:50.242285 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 05:00:50.242299 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 4 05:00:50.242510 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Nov 4 05:00:50.242523 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 05:00:50.242532 kernel: device-mapper: uevent: version 1.0.3
Nov 4 05:00:50.242541 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 05:00:50.242549 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 05:00:50.242562 kernel: raid6: avx2x4 gen() 37160 MB/s
Nov 4 05:00:50.242572 kernel: raid6: avx2x2 gen() 38201 MB/s
Nov 4 05:00:50.242581 kernel: raid6: avx2x1 gen() 26677 MB/s
Nov 4 05:00:50.242590 kernel: raid6: using algorithm avx2x2 gen() 38201 MB/s
Nov 4 05:00:50.242599 kernel: raid6: .... xor() 29583 MB/s, rmw enabled
Nov 4 05:00:50.242610 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 05:00:50.242619 kernel: xor: automatically using best checksumming function avx
Nov 4 05:00:50.242628 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 05:00:50.242637 kernel: BTRFS: device fsid 6f0a5369-79b6-4a87-b9a6-85ec05be306c devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (167)
Nov 4 05:00:50.242646 kernel: BTRFS info (device dm-0): first mount of filesystem 6f0a5369-79b6-4a87-b9a6-85ec05be306c
Nov 4 05:00:50.242655 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 05:00:50.242664 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 4 05:00:50.242676 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 05:00:50.242684 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 05:00:50.242693 kernel: loop: module loaded
Nov 4 05:00:50.242702 kernel: loop0: detected capacity change from 0 to 100136
Nov 4 05:00:50.242711 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 05:00:50.242721 systemd[1]: Successfully made /usr/ read-only.
Nov 4 05:00:50.242735 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 05:00:50.242745 systemd[1]: Detected virtualization kvm.
Nov 4 05:00:50.242754 systemd[1]: Detected architecture x86-64.
Nov 4 05:00:50.242763 systemd[1]: Running in initrd.
Nov 4 05:00:50.242772 systemd[1]: No hostname configured, using default hostname.
Nov 4 05:00:50.242781 systemd[1]: Hostname set to .
Nov 4 05:00:50.242792 systemd[1]: Initializing machine ID from random generator.
Nov 4 05:00:50.242801 systemd[1]: Queued start job for default target initrd.target.
Nov 4 05:00:50.242811 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 05:00:50.242819 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 05:00:50.242829 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 05:00:50.242838 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 05:00:50.242847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 05:00:50.242859 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 05:00:50.242868 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 05:00:50.242877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 05:00:50.242886 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 05:00:50.242896 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 05:00:50.242907 systemd[1]: Reached target paths.target - Path Units.
Nov 4 05:00:50.242917 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 05:00:50.242925 systemd[1]: Reached target swap.target - Swaps.
Nov 4 05:00:50.242934 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 05:00:50.242943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 05:00:50.242952 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 05:00:50.242961 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 05:00:50.242972 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 05:00:50.242982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 05:00:50.242991 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 05:00:50.243000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 05:00:50.243009 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 05:00:50.243018 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 05:00:50.243027 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 05:00:50.243039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 05:00:50.243048 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 05:00:50.243058 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 05:00:50.243067 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 05:00:50.243076 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 05:00:50.243085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 05:00:50.243094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 05:00:50.243106 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 05:00:50.243116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 05:00:50.243125 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 05:00:50.243136 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 05:00:50.243170 systemd-journald[303]: Collecting audit messages is disabled.
Nov 4 05:00:50.243191 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 05:00:50.243203 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 05:00:50.243212 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 05:00:50.243222 systemd-journald[303]: Journal started
Nov 4 05:00:50.243267 systemd-journald[303]: Runtime Journal (/run/log/journal/27bb3c11e5a94617ac6801675dacbfcd) is 8M, max 78.1M, 70.1M free.
Nov 4 05:00:50.250836 kernel: Bridge firewalling registered
Nov 4 05:00:50.250911 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 05:00:50.251986 systemd-modules-load[304]: Inserted module 'br_netfilter'
Nov 4 05:00:50.257296 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 05:00:50.264451 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 05:00:50.272391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 05:00:50.363057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:00:50.366087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 05:00:50.372423 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 05:00:50.376921 systemd-tmpfiles[323]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 05:00:50.391586 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 05:00:50.393146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 05:00:50.400272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 05:00:50.411442 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 05:00:50.415674 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 05:00:50.441804 dracut-cmdline[344]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 05:00:50.469076 systemd-resolved[335]: Positive Trust Anchors:
Nov 4 05:00:50.469092 systemd-resolved[335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 05:00:50.469096 systemd-resolved[335]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 05:00:50.469124 systemd-resolved[335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 05:00:50.497625 systemd-resolved[335]: Defaulting to hostname 'linux'.
Nov 4 05:00:50.499138 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 05:00:50.501382 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 05:00:50.574278 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 05:00:50.591277 kernel: iscsi: registered transport (tcp)
Nov 4 05:00:50.615326 kernel: iscsi: registered transport (qla4xxx)
Nov 4 05:00:50.615387 kernel: QLogic iSCSI HBA Driver
Nov 4 05:00:50.648834 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 05:00:50.667318 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 05:00:50.671125 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 05:00:50.733456 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 05:00:50.736385 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 05:00:50.739384 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 05:00:50.780020 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 05:00:50.784400 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 05:00:50.815407 systemd-udevd[586]: Using default interface naming scheme 'v257'.
Nov 4 05:00:50.829000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 05:00:50.833430 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 05:00:50.862924 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 05:00:50.886451 dracut-pre-trigger[656]: rd.md=0: removing MD RAID activation
Nov 4 05:00:50.890384 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 05:00:50.902890 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 05:00:50.907374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 05:00:50.943254 systemd-networkd[705]: lo: Link UP
Nov 4 05:00:50.944269 systemd-networkd[705]: lo: Gained carrier
Nov 4 05:00:50.944906 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 05:00:50.946546 systemd[1]: Reached target network.target - Network.
Nov 4 05:00:51.007383 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 05:00:51.011845 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 05:00:51.119489 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 4 05:00:51.137424 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 4 05:00:51.153025 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 4 05:00:51.156122 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 05:00:51.166189 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 05:00:51.301011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 05:00:51.301134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:00:51.320597 kernel: AES CTR mode by8 optimization enabled
Nov 4 05:00:51.320630 disk-uuid[759]: Primary Header is updated.
Nov 4 05:00:51.320630 disk-uuid[759]: Secondary Entries is updated.
Nov 4 05:00:51.320630 disk-uuid[759]: Secondary Header is updated.
Nov 4 05:00:51.355530 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 05:00:51.302172 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 05:00:51.307299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 05:00:51.405461 systemd-networkd[705]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 05:00:51.405473 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 05:00:51.407559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 4 05:00:51.409951 systemd-networkd[705]: eth0: Link UP
Nov 4 05:00:51.410175 systemd-networkd[705]: eth0: Gained carrier
Nov 4 05:00:51.410187 systemd-networkd[705]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 05:00:51.573419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 05:00:51.576977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:00:51.580426 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 05:00:51.581544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 05:00:51.584024 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 05:00:51.589040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 05:00:51.618054 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 05:00:52.161319 systemd-networkd[705]: eth0: DHCPv4 address 172.237.150.130/24, gateway 172.237.150.1 acquired from 23.194.118.57
Nov 4 05:00:52.436288 disk-uuid[761]: Warning: The kernel is still using the old partition table.
Nov 4 05:00:52.436288 disk-uuid[761]: The new table will be used at the next reboot or after you
Nov 4 05:00:52.436288 disk-uuid[761]: run partprobe(8) or kpartx(8)
Nov 4 05:00:52.436288 disk-uuid[761]: The operation has completed successfully.
Nov 4 05:00:52.446565 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 05:00:52.446742 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 05:00:52.449808 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 05:00:52.504287 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855)
Nov 4 05:00:52.511331 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:00:52.511518 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 05:00:52.518738 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 4 05:00:52.518770 kernel: BTRFS info (device sda6): turning on async discard
Nov 4 05:00:52.518787 kernel: BTRFS info (device sda6): enabling free space tree
Nov 4 05:00:52.531270 kernel: BTRFS info (device sda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:00:52.532923 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 05:00:52.535424 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 05:00:52.693742 ignition[874]: Ignition 2.22.0
Nov 4 05:00:52.693766 ignition[874]: Stage: fetch-offline
Nov 4 05:00:52.693816 ignition[874]: no configs at "/usr/lib/ignition/base.d"
Nov 4 05:00:52.693845 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 4 05:00:52.696746 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 05:00:52.693958 ignition[874]: parsed url from cmdline: ""
Nov 4 05:00:52.700452 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 4 05:00:52.693964 ignition[874]: no config URL provided
Nov 4 05:00:52.693972 ignition[874]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 05:00:52.693985 ignition[874]: no config at "/usr/lib/ignition/user.ign"
Nov 4 05:00:52.693992 ignition[874]: failed to fetch config: resource requires networking
Nov 4 05:00:52.694388 ignition[874]: Ignition finished successfully
Nov 4 05:00:52.741668 ignition[881]: Ignition 2.22.0
Nov 4 05:00:52.741691 ignition[881]: Stage: fetch
Nov 4 05:00:52.742134 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Nov 4 05:00:52.742147 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 4 05:00:52.742279 ignition[881]: parsed url from cmdline: ""
Nov 4 05:00:52.742285 ignition[881]: no config URL provided
Nov 4 05:00:52.742292 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 05:00:52.742302 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Nov 4 05:00:52.742332 ignition[881]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 4 05:00:52.835192 ignition[881]: PUT result: OK
Nov 4 05:00:52.835369 ignition[881]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 4 05:00:52.976731 ignition[881]: GET result: OK
Nov 4 05:00:52.976854 ignition[881]: parsing config with SHA512: c85496a9f0431fa5c7a18b3d6d0ac2d5d7adb524fbf8b0e8e972a00101cfbdd61e17604c69f763cb98806e1dafee5934d63891854a831d9a11ddee3dc310750f
Nov 4 05:00:52.983651 unknown[881]: fetched base config from "system"
Nov 4 05:00:52.983670 unknown[881]: fetched base config from "system"
Nov 4 05:00:52.984122 ignition[881]: fetch: fetch complete
Nov 4 05:00:52.983678 unknown[881]: fetched user config from "akamai"
Nov 4 05:00:52.984130 ignition[881]: fetch: fetch passed
Nov 4 05:00:52.987437 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 4 05:00:52.984189 ignition[881]: Ignition finished successfully
Nov 4 05:00:52.991171 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 05:00:53.025739 ignition[888]: Ignition 2.22.0
Nov 4 05:00:53.025761 ignition[888]: Stage: kargs
Nov 4 05:00:53.025916 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Nov 4 05:00:53.025927 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 4 05:00:53.026688 ignition[888]: kargs: kargs passed
Nov 4 05:00:53.026748 ignition[888]: Ignition finished successfully
Nov 4 05:00:53.033336 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 05:00:53.036966 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 05:00:53.072742 ignition[895]: Ignition 2.22.0
Nov 4 05:00:53.072764 ignition[895]: Stage: disks
Nov 4 05:00:53.072907 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Nov 4 05:00:53.072919 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 4 05:00:53.073904 ignition[895]: disks: disks passed
Nov 4 05:00:53.073954 ignition[895]: Ignition finished successfully
Nov 4 05:00:53.077324 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 05:00:53.079265 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 05:00:53.102643 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 05:00:53.104642 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 05:00:53.106690 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 05:00:53.108628 systemd[1]: Reached target basic.target - Basic System.
Nov 4 05:00:53.111929 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 05:00:53.153340 systemd-fsck[903]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Nov 4 05:00:53.156812 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 05:00:53.158957 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 05:00:53.286257 kernel: EXT4-fs (sda9): mounted filesystem c35327fb-3cdd-496e-85aa-9e1b4133507f r/w with ordered data mode. Quota mode: none.
Nov 4 05:00:53.286936 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 05:00:53.288423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 05:00:53.291311 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 05:00:53.295316 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 05:00:53.297651 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 05:00:53.297717 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 05:00:53.297755 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 05:00:53.308190 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 05:00:53.311321 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 05:00:53.318261 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (911)
Nov 4 05:00:53.322843 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:00:53.322893 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 05:00:53.337218 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 4 05:00:53.337282 kernel: BTRFS info (device sda6): turning on async discard
Nov 4 05:00:53.337297 kernel: BTRFS info (device sda6): enabling free space tree
Nov 4 05:00:53.340890 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 05:00:53.389615 initrd-setup-root[935]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 05:00:53.396532 initrd-setup-root[942]: cut: /sysroot/etc/group: No such file or directory
Nov 4 05:00:53.402067 initrd-setup-root[949]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 05:00:53.407966 initrd-setup-root[956]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 05:00:53.439479 systemd-networkd[705]: eth0: Gained IPv6LL
Nov 4 05:00:53.539798 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 05:00:53.542763 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 05:00:53.546371 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 05:00:53.560929 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 05:00:53.566256 kernel: BTRFS info (device sda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:00:53.585164 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 05:00:53.603810 ignition[1026]: INFO : Ignition 2.22.0
Nov 4 05:00:53.603810 ignition[1026]: INFO : Stage: mount
Nov 4 05:00:53.607835 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 05:00:53.607835 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 4 05:00:53.607835 ignition[1026]: INFO : mount: mount passed
Nov 4 05:00:53.607835 ignition[1026]: INFO : Ignition finished successfully
Nov 4 05:00:53.608378 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 05:00:53.611358 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 05:00:53.635508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 05:00:53.660265 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1037)
Nov 4 05:00:53.665399 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:00:53.665428 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 05:00:53.675740 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 4 05:00:53.675805 kernel: BTRFS info (device sda6): turning on async discard
Nov 4 05:00:53.675840 kernel: BTRFS info (device sda6): enabling free space tree
Nov 4 05:00:53.680981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 05:00:53.714912 ignition[1053]: INFO : Ignition 2.22.0
Nov 4 05:00:53.714912 ignition[1053]: INFO : Stage: files
Nov 4 05:00:53.717387 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 05:00:53.717387 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 4 05:00:53.717387 ignition[1053]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 05:00:53.717387 ignition[1053]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 05:00:53.717387 ignition[1053]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 05:00:53.725193 ignition[1053]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 05:00:53.725193 ignition[1053]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 05:00:53.725193 ignition[1053]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 05:00:53.725193 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 05:00:53.725193 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 05:00:53.722369 unknown[1053]: wrote ssh authorized keys file for user: core
Nov 4 05:00:53.988278 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 05:00:54.190076 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 05:00:54.191940 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 4 05:00:54.191940 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 4 05:00:54.421855 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 4 05:00:54.499888 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 4 05:00:54.499888 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 05:00:54.502970 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 05:00:54.502970 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 05:00:54.502970 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 05:00:54.502970 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 05:00:54.502970 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 05:00:54.502970 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 05:00:54.513601 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 05:00:54.513601 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 05:00:54.513601 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 05:00:54.513601 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 05:00:54.513601 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 05:00:54.513601 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 05:00:54.513601 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 4 05:00:54.834885 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 4 05:00:55.136731 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 05:00:55.136731 ignition[1053]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 05:00:55.161880 ignition[1053]: INFO : files: files passed
Nov 4 05:00:55.161880 ignition[1053]: INFO : Ignition finished successfully
Nov 4 05:00:55.144684 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 05:00:55.165421 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 05:00:55.170472 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 05:00:55.182104 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 05:00:55.195157 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 05:00:55.195157 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 05:00:55.182212 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 05:00:55.200579 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 05:00:55.199825 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 05:00:55.202121 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 05:00:55.204663 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 05:00:55.263482 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 05:00:55.263644 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 05:00:55.266167 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 05:00:55.267851 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 05:00:55.271446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 05:00:55.273374 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 05:00:55.303387 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 05:00:55.306157 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 05:00:55.325257 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 05:00:55.325416 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 05:00:55.326439 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 05:00:55.328586 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 05:00:55.330511 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 05:00:55.330659 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 05:00:55.333113 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 05:00:55.334429 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 05:00:55.336272 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 05:00:55.338331 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 05:00:55.340217 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 05:00:55.342030 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 05:00:55.344119 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 05:00:55.346139 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 05:00:55.348361 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 05:00:55.350361 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 05:00:55.352332 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 05:00:55.354265 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 05:00:55.354442 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 05:00:55.356917 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 05:00:55.358248 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 05:00:55.359951 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 05:00:55.362562 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 05:00:55.364093 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 05:00:55.364269 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 05:00:55.366783 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 05:00:55.366941 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 05:00:55.368155 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 05:00:55.368343 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 05:00:55.371296 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 05:00:55.377025 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 05:00:55.380732 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 05:00:55.380899 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 05:00:55.383098 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 05:00:55.384574 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 05:00:55.385680 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 05:00:55.385858 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 05:00:55.395579 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 05:00:55.395705 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 05:00:55.416751 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 05:00:55.421351 ignition[1110]: INFO : Ignition 2.22.0
Nov 4 05:00:55.421351 ignition[1110]: INFO : Stage: umount
Nov 4 05:00:55.426419 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 05:00:55.426419 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 4 05:00:55.426419 ignition[1110]: INFO : umount: umount passed
Nov 4 05:00:55.426419 ignition[1110]: INFO : Ignition finished successfully
Nov 4 05:00:55.430702 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 05:00:55.430850 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 05:00:55.436046 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 05:00:55.436167 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 05:00:55.438929 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 05:00:55.438991 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 05:00:55.465176 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 4 05:00:55.465293 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 4 05:00:55.466206 systemd[1]: Stopped target network.target - Network.
Nov 4 05:00:55.470122 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 05:00:55.471136 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 05:00:55.477364 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 05:00:55.478637 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 05:00:55.479026 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 05:00:55.480464 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 05:00:55.482396 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 05:00:55.484605 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 05:00:55.484662 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 05:00:55.486465 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 05:00:55.486511 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 05:00:55.488303 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 05:00:55.488375 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 05:00:55.490052 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 05:00:55.490102 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 05:00:55.492266 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 05:00:55.493861 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 05:00:55.496765 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 05:00:55.496886 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 05:00:55.500647 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 05:00:55.500755 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 05:00:55.506653 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 05:00:55.506840 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 05:00:55.511659 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 05:00:55.511796 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 05:00:55.514774 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 05:00:55.516060 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 05:00:55.516111 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 05:00:55.518909 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 05:00:55.521997 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 05:00:55.522089 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 05:00:55.523117 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 05:00:55.523180 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 05:00:55.526338 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 05:00:55.526395 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 05:00:55.530363 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 05:00:55.551585 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 05:00:55.551812 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 05:00:55.555530 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 05:00:55.555617 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 05:00:55.557335 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 05:00:55.557386 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 05:00:55.558963 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 05:00:55.559025 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 05:00:55.561887 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 05:00:55.561944 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 05:00:55.563932 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 05:00:55.564004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 05:00:55.567551 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 05:00:55.569470 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 05:00:55.569532 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 05:00:55.572915 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 05:00:55.572973 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 05:00:55.574919 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 4 05:00:55.575001 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 05:00:55.576834 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 05:00:55.576895 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 05:00:55.579088 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 05:00:55.579152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:00:55.582379 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 05:00:55.582514 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 05:00:55.589446 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 05:00:55.589608 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 05:00:55.592454 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 05:00:55.595188 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 05:00:55.612911 systemd[1]: Switching root.
Nov 4 05:00:55.648324 systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Nov 4 05:00:55.648423 systemd-journald[303]: Journal stopped
Nov 4 05:00:56.927310 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 05:00:56.927346 kernel: SELinux: policy capability open_perms=1
Nov 4 05:00:56.927359 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 05:00:56.927369 kernel: SELinux: policy capability always_check_network=0
Nov 4 05:00:56.927378 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 05:00:56.927391 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 05:00:56.927402 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 05:00:56.927411 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 05:00:56.927421 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 05:00:56.927431 kernel: audit: type=1403 audit(1762232455.783:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 05:00:56.927442 systemd[1]: Successfully loaded SELinux policy in 79.523ms.
Nov 4 05:00:56.927456 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.014ms.
Nov 4 05:00:56.927468 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 05:00:56.927478 systemd[1]: Detected virtualization kvm.
Nov 4 05:00:56.927489 systemd[1]: Detected architecture x86-64.
Nov 4 05:00:56.927502 systemd[1]: Detected first boot.
Nov 4 05:00:56.927512 systemd[1]: Initializing machine ID from random generator.
Nov 4 05:00:56.927523 zram_generator::config[1155]: No configuration found.
Nov 4 05:00:56.927534 kernel: Guest personality initialized and is inactive
Nov 4 05:00:56.927543 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 05:00:56.927556 kernel: Initialized host personality
Nov 4 05:00:56.927566 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 05:00:56.927576 systemd[1]: Populated /etc with preset unit settings.
Nov 4 05:00:56.927587 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 05:00:56.927598 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 05:00:56.927609 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 05:00:56.927622 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 05:00:56.927633 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 05:00:56.927643 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 05:00:56.927654 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 05:00:56.927667 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 05:00:56.927677 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 05:00:56.927690 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 05:00:56.927701 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 05:00:56.927712 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 05:00:56.927722 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 05:00:56.927733 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 05:00:56.927744 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 05:00:56.927755 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 05:00:56.927768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 05:00:56.927781 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 05:00:56.927792 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 05:00:56.927803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 05:00:56.927814 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 05:00:56.927825 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 05:00:56.927838 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 05:00:56.927849 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 05:00:56.927861 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 05:00:56.927872 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 05:00:56.927882 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 05:00:56.927893 systemd[1]: Reached target swap.target - Swaps.
Nov 4 05:00:56.927906 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 05:00:56.927917 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 05:00:56.927928 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 05:00:56.927939 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 05:00:56.927950 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 05:00:56.927963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 05:00:56.927974 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 05:00:56.927984 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 05:00:56.927995 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 05:00:56.928006 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 05:00:56.928017 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:00:56.928030 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 05:00:56.928040 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 05:00:56.928051 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 05:00:56.928063 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 05:00:56.928074 systemd[1]: Reached target machines.target - Containers.
Nov 4 05:00:56.928086 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 05:00:56.928099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 05:00:56.928110 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 05:00:56.928121 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 05:00:56.928132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 05:00:56.928143 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 05:00:56.928154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 05:00:56.928164 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 05:00:56.928177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 05:00:56.928188 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 05:00:56.928199 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 05:00:56.928210 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 05:00:56.928239 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 05:00:56.928261 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 05:00:56.928275 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 05:00:56.928291 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 05:00:56.928302 kernel: fuse: init (API version 7.41)
Nov 4 05:00:56.928312 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 05:00:56.928326 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 05:00:56.928337 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 05:00:56.928348 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 05:00:56.928361 kernel: ACPI: bus type drm_connector registered
Nov 4 05:00:56.928372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 05:00:56.928383 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:00:56.928394 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 05:00:56.928405 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 05:00:56.928415 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 05:00:56.928426 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 05:00:56.928460 systemd-journald[1236]: Collecting audit messages is disabled.
Nov 4 05:00:56.928481 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 05:00:56.928493 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 05:00:56.928506 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 05:00:56.928517 systemd-journald[1236]: Journal started
Nov 4 05:00:56.928537 systemd-journald[1236]: Runtime Journal (/run/log/journal/bfbef32257874a4ea0bd17ed5ec4b1eb) is 8M, max 78.1M, 70.1M free.
Nov 4 05:00:56.458053 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 05:00:56.484247 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 4 05:00:56.931301 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 05:00:56.485312 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 05:00:56.933267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 05:00:56.935387 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 05:00:56.935804 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 05:00:56.937177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 05:00:56.937614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 05:00:56.938897 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 05:00:56.939259 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 05:00:56.940826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 05:00:56.941136 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 05:00:56.942562 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 05:00:56.942878 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 05:00:56.944268 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 05:00:56.944597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 05:00:56.946211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 05:00:56.947999 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 05:00:56.950425 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 05:00:56.951980 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 05:00:56.971058 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 05:00:56.973025 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 05:00:56.977358 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 05:00:56.982436 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 05:00:56.984122 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 05:00:56.984221 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 05:00:56.986094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 05:00:56.987309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 05:00:56.992105 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 05:00:56.997440 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 05:00:56.999578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 05:00:57.001370 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 05:00:57.003352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 05:00:57.006996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 05:00:57.021740 systemd-journald[1236]: Time spent on flushing to /var/log/journal/bfbef32257874a4ea0bd17ed5ec4b1eb is 49.864ms for 985 entries.
Nov 4 05:00:57.021740 systemd-journald[1236]: System Journal (/var/log/journal/bfbef32257874a4ea0bd17ed5ec4b1eb) is 8M, max 588.1M, 580.1M free.
Nov 4 05:00:57.102419 systemd-journald[1236]: Received client request to flush runtime journal.
Nov 4 05:00:57.102487 kernel: loop1: detected capacity change from 0 to 119080
Nov 4 05:00:57.011521 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 05:00:57.016402 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 05:00:57.020124 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 05:00:57.024373 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 05:00:57.039865 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 05:00:57.040910 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 05:00:57.045442 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 05:00:57.074503 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 05:00:57.091868 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 05:00:57.105282 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 05:00:57.107681 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 05:00:57.115556 kernel: loop2: detected capacity change from 0 to 111544
Nov 4 05:00:57.116588 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Nov 4 05:00:57.117516 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Nov 4 05:00:57.130368 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 05:00:57.134371 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 05:00:57.151327 kernel: loop3: detected capacity change from 0 to 8
Nov 4 05:00:57.175287 kernel: loop4: detected capacity change from 0 to 219144
Nov 4 05:00:57.179848 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 05:00:57.185461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 05:00:57.188798 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 05:00:57.205400 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 05:00:57.219652 kernel: loop5: detected capacity change from 0 to 119080
Nov 4 05:00:57.224630 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Nov 4 05:00:57.224650 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Nov 4 05:00:57.230118 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 05:00:57.235695 kernel: loop6: detected capacity change from 0 to 111544
Nov 4 05:00:57.254254 kernel: loop7: detected capacity change from 0 to 8
Nov 4 05:00:57.264249 kernel: loop1: detected capacity change from 0 to 219144
Nov 4 05:00:57.279602 (sd-merge)[1306]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'.
Nov 4 05:00:57.282855 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 05:00:57.288524 (sd-merge)[1306]: Merged extensions into '/usr'.
Nov 4 05:00:57.298378 systemd[1]: Reload requested from client PID 1280 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 05:00:57.298396 systemd[1]: Reloading...
Nov 4 05:00:57.437953 systemd-resolved[1301]: Positive Trust Anchors:
Nov 4 05:00:57.442274 systemd-resolved[1301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 05:00:57.442345 systemd-resolved[1301]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 05:00:57.442418 systemd-resolved[1301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 05:00:57.449375 zram_generator::config[1343]: No configuration found.
Nov 4 05:00:57.456718 systemd-resolved[1301]: Defaulting to hostname 'linux'.
Nov 4 05:00:57.638731 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 05:00:57.639366 systemd[1]: Reloading finished in 340 ms. Nov 4 05:00:57.674454 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 05:00:57.676099 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 05:00:57.677706 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 05:00:57.684526 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 05:00:57.700942 systemd[1]: Starting ensure-sysext.service... Nov 4 05:00:57.703373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 05:00:57.708460 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 05:00:57.727982 systemd[1]: Reload requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)... Nov 4 05:00:57.728137 systemd[1]: Reloading... Nov 4 05:00:57.750742 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 05:00:57.750780 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 05:00:57.753148 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 05:00:57.753751 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 05:00:57.753871 systemd-udevd[1385]: Using default interface naming scheme 'v257'. Nov 4 05:00:57.757644 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 05:00:57.757933 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Nov 4 05:00:57.758036 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Nov 4 05:00:57.769962 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 4 05:00:57.769986 systemd-tmpfiles[1384]: Skipping /boot Nov 4 05:00:57.794967 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 05:00:57.794984 systemd-tmpfiles[1384]: Skipping /boot Nov 4 05:00:57.863283 zram_generator::config[1424]: No configuration found. Nov 4 05:00:58.080275 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 05:00:58.100257 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 4 05:00:58.105498 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 4 05:00:58.112984 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 4 05:00:58.152316 kernel: ACPI: button: Power Button [PWRF] Nov 4 05:00:58.155724 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 05:00:58.156126 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 4 05:00:58.159175 systemd[1]: Reloading finished in 430 ms. Nov 4 05:00:58.173668 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 05:00:58.196468 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 05:00:58.235108 systemd[1]: Finished ensure-sysext.service. Nov 4 05:00:58.257010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 05:00:58.258606 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 05:00:58.262782 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 05:00:58.265475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 05:00:58.266894 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 05:00:58.276661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 4 05:00:58.289405 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 05:00:58.297441 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 05:00:58.306748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 05:00:58.307948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 05:00:58.310158 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 05:00:58.312432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 05:00:58.315188 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 05:00:58.321878 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 05:00:58.333456 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 05:00:58.344615 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 05:00:58.346653 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 05:00:58.348967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 05:00:58.357604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 05:00:58.360028 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 05:00:58.361472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 05:00:58.387403 kernel: EDAC MC: Ver: 3.0.0 Nov 4 05:00:58.410431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 4 05:00:58.410793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 05:00:58.421818 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 05:00:58.422364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 05:00:58.441998 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 05:00:58.442175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 05:00:58.453441 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 05:00:58.465727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 05:00:58.479137 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 05:00:58.483447 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 05:00:58.518292 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 05:00:58.521859 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 05:00:58.529457 augenrules[1550]: No rules Nov 4 05:00:58.534000 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 05:00:58.535301 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 05:00:58.685298 systemd-networkd[1518]: lo: Link UP Nov 4 05:00:58.685312 systemd-networkd[1518]: lo: Gained carrier Nov 4 05:00:58.687301 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 05:00:58.688338 systemd[1]: Reached target network.target - Network. 
Nov 4 05:00:58.689167 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 05:00:58.689179 systemd-networkd[1518]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 05:00:58.691703 systemd-networkd[1518]: eth0: Link UP Nov 4 05:00:58.692106 systemd-networkd[1518]: eth0: Gained carrier Nov 4 05:00:58.692128 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 05:00:58.810267 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 05:00:58.814439 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 05:00:58.816764 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 05:00:58.824626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 05:00:58.828883 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 05:00:58.845673 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 05:00:58.984859 ldconfig[1504]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 05:00:58.988887 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 05:00:58.991699 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 05:00:59.018088 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 05:00:59.019293 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 05:00:59.020300 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 4 05:00:59.021296 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 05:00:59.022453 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 05:00:59.023555 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 05:00:59.024567 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 05:00:59.025520 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 05:00:59.026485 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 05:00:59.026532 systemd[1]: Reached target paths.target - Path Units. Nov 4 05:00:59.027354 systemd[1]: Reached target timers.target - Timer Units. Nov 4 05:00:59.029546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 05:00:59.032286 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 05:00:59.035060 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 05:00:59.036161 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 05:00:59.037151 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 05:00:59.040680 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 05:00:59.041894 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 05:00:59.043438 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 05:00:59.045058 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 05:00:59.045894 systemd[1]: Reached target basic.target - Basic System. Nov 4 05:00:59.046762 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Nov 4 05:00:59.046810 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 05:00:59.047966 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 05:00:59.052376 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 4 05:00:59.056146 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 05:00:59.059429 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 05:00:59.063398 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 05:00:59.066437 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 05:00:59.068295 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 05:00:59.073749 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 05:00:59.083449 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 05:00:59.087543 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 05:00:59.097470 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 05:00:59.101557 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 05:00:59.103345 jq[1575]: false Nov 4 05:00:59.115702 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 05:00:59.118277 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 05:00:59.118719 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 05:00:59.124216 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 4 05:00:59.130556 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing passwd entry cache Nov 4 05:00:59.133154 oslogin_cache_refresh[1577]: Refreshing passwd entry cache Nov 4 05:00:59.133681 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 05:00:59.139345 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 05:00:59.141642 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 05:00:59.145581 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 05:00:59.155279 jq[1592]: true Nov 4 05:00:59.164297 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting users, quitting Nov 4 05:00:59.164297 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 05:00:59.164297 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing group entry cache Nov 4 05:00:59.164297 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting groups, quitting Nov 4 05:00:59.164297 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 05:00:59.163345 oslogin_cache_refresh[1577]: Failure getting users, quitting Nov 4 05:00:59.163364 oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 05:00:59.163409 oslogin_cache_refresh[1577]: Refreshing group entry cache Nov 4 05:00:59.163891 oslogin_cache_refresh[1577]: Failure getting groups, quitting Nov 4 05:00:59.163901 oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 05:00:59.169918 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Nov 4 05:00:59.171116 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 05:00:59.173318 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 05:00:59.175017 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 05:00:59.200269 update_engine[1586]: I20251104 05:00:59.198825 1586 main.cc:92] Flatcar Update Engine starting Nov 4 05:00:59.204008 jq[1595]: true Nov 4 05:00:59.208131 extend-filesystems[1576]: Found /dev/sda6 Nov 4 05:00:59.216694 extend-filesystems[1576]: Found /dev/sda9 Nov 4 05:00:59.224869 extend-filesystems[1576]: Checking size of /dev/sda9 Nov 4 05:00:59.226481 coreos-metadata[1572]: Nov 04 05:00:59.226 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 4 05:00:59.257056 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 05:00:59.257864 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 05:00:59.284814 tar[1594]: linux-amd64/LICENSE Nov 4 05:00:59.284814 tar[1594]: linux-amd64/helm Nov 4 05:00:59.290538 dbus-daemon[1573]: [system] SELinux support is enabled Nov 4 05:00:59.297480 extend-filesystems[1576]: Resized partition /dev/sda9 Nov 4 05:00:59.290756 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 05:00:59.296190 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 05:00:59.296219 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 05:00:59.298411 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 05:00:59.298427 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Nov 4 05:00:59.311170 extend-filesystems[1634]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 05:00:59.315289 systemd[1]: Started update-engine.service - Update Engine. Nov 4 05:00:59.317084 update_engine[1586]: I20251104 05:00:59.316815 1586 update_check_scheduler.cc:74] Next update check in 4m13s Nov 4 05:00:59.320795 systemd-logind[1584]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 05:00:59.321071 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 05:00:59.344621 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks Nov 4 05:00:59.359623 systemd-logind[1584]: New seat seat0. Nov 4 05:00:59.389628 bash[1646]: Updated "/home/core/.ssh/authorized_keys" Nov 4 05:00:59.391470 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 05:00:59.392612 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 05:00:59.395672 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 05:00:59.408958 systemd[1]: Starting sshkeys.service... Nov 4 05:00:59.415055 systemd-networkd[1518]: eth0: DHCPv4 address 172.237.150.130/24, gateway 172.237.150.1 acquired from 23.194.118.57 Nov 4 05:00:59.416140 dbus-daemon[1573]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1518 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 4 05:00:59.417284 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Nov 4 05:00:59.422760 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 4 05:00:59.486107 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 4 05:00:59.490126 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 4 05:00:59.627516 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 05:00:59.683484 systemd-timesyncd[1519]: Contacted time server 51.81.226.229:123 (0.flatcar.pool.ntp.org). Nov 4 05:00:59.683569 systemd-timesyncd[1519]: Initial clock synchronization to Tue 2025-11-04 05:00:59.742853 UTC. Nov 4 05:00:59.728005 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 4 05:00:59.743098 containerd[1605]: time="2025-11-04T05:00:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 05:00:59.748063 coreos-metadata[1655]: Nov 04 05:00:59.747 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 4 05:00:59.753289 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Nov 4 05:00:59.753641 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 4 05:00:59.766508 dbus-daemon[1573]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1651 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 4 05:00:59.774651 containerd[1605]: time="2025-11-04T05:00:59.769610310Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 4 05:00:59.775547 extend-filesystems[1634]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 4 05:00:59.775547 extend-filesystems[1634]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 4 05:00:59.775547 extend-filesystems[1634]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Nov 4 05:00:59.798377 extend-filesystems[1576]: Resized filesystem in /dev/sda9 Nov 4 05:00:59.780625 systemd[1]: Starting polkit.service - Authorization Manager... Nov 4 05:00:59.792751 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Nov 4 05:00:59.793972 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 05:00:59.797150 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 05:00:59.819201 containerd[1605]: time="2025-11-04T05:00:59.819155360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.99µs" Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.819494900Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.819553340Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.819568290Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.819749100Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.819766180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.819837050Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.819848130Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.822214800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.822259870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.822274300Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.822283810Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823401 containerd[1605]: time="2025-11-04T05:00:59.822467160Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823698 containerd[1605]: time="2025-11-04T05:00:59.822484380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823698 containerd[1605]: time="2025-11-04T05:00:59.822591250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823698 containerd[1605]: time="2025-11-04T05:00:59.822815610Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823698 containerd[1605]: time="2025-11-04T05:00:59.822845880Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 05:00:59.823698 containerd[1605]: time="2025-11-04T05:00:59.822854970Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 05:00:59.823698 containerd[1605]: 
time="2025-11-04T05:00:59.822900200Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 05:00:59.823698 containerd[1605]: time="2025-11-04T05:00:59.823115550Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 05:00:59.823698 containerd[1605]: time="2025-11-04T05:00:59.823182810Z" level=info msg="metadata content store policy set" policy=shared Nov 4 05:00:59.828305 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 05:00:59.834713 containerd[1605]: time="2025-11-04T05:00:59.834656690Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 05:00:59.834887 containerd[1605]: time="2025-11-04T05:00:59.834744410Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 05:00:59.834887 containerd[1605]: time="2025-11-04T05:00:59.834871820Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 05:00:59.834887 containerd[1605]: time="2025-11-04T05:00:59.834885860Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 05:00:59.834968 containerd[1605]: time="2025-11-04T05:00:59.834901680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 05:00:59.834968 containerd[1605]: time="2025-11-04T05:00:59.834915440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 05:00:59.834968 containerd[1605]: time="2025-11-04T05:00:59.834928720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 05:00:59.834968 containerd[1605]: time="2025-11-04T05:00:59.834939970Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 05:00:59.834968 containerd[1605]: time="2025-11-04T05:00:59.834954490Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 05:00:59.834968 containerd[1605]: time="2025-11-04T05:00:59.834971050Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 05:00:59.835075 containerd[1605]: time="2025-11-04T05:00:59.834985380Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 05:00:59.835075 containerd[1605]: time="2025-11-04T05:00:59.834996970Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 05:00:59.835075 containerd[1605]: time="2025-11-04T05:00:59.835007650Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 05:00:59.835075 containerd[1605]: time="2025-11-04T05:00:59.835021050Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 05:00:59.835454 containerd[1605]: time="2025-11-04T05:00:59.835157500Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 05:00:59.835454 containerd[1605]: time="2025-11-04T05:00:59.835185770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 05:00:59.835454 containerd[1605]: time="2025-11-04T05:00:59.835201750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 05:00:59.839314 containerd[1605]: time="2025-11-04T05:00:59.835219900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 05:00:59.839428 containerd[1605]: time="2025-11-04T05:00:59.839312740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events 
type=io.containerd.grpc.v1
Nov 4 05:00:59.839463 containerd[1605]: time="2025-11-04T05:00:59.839427780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 05:00:59.839517 containerd[1605]: time="2025-11-04T05:00:59.839479200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 05:00:59.839517 containerd[1605]: time="2025-11-04T05:00:59.839510740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 05:00:59.839571 containerd[1605]: time="2025-11-04T05:00:59.839527550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 05:00:59.839593 containerd[1605]: time="2025-11-04T05:00:59.839573180Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 05:00:59.839614 containerd[1605]: time="2025-11-04T05:00:59.839591750Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 4 05:00:59.839936 containerd[1605]: time="2025-11-04T05:00:59.839653660Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 05:00:59.839936 containerd[1605]: time="2025-11-04T05:00:59.839750160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 05:00:59.839936 containerd[1605]: time="2025-11-04T05:00:59.839769120Z" level=info msg="Start snapshots syncer"
Nov 4 05:00:59.839936 containerd[1605]: time="2025-11-04T05:00:59.839851620Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 05:00:59.846583 containerd[1605]: time="2025-11-04T05:00:59.846496490Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 05:00:59.846905 containerd[1605]: time="2025-11-04T05:00:59.846633780Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 05:00:59.846905 containerd[1605]: time="2025-11-04T05:00:59.846751940Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 4 05:00:59.847245 containerd[1605]: time="2025-11-04T05:00:59.847016610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 4 05:00:59.847245 containerd[1605]: time="2025-11-04T05:00:59.847091340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 4 05:00:59.847245 containerd[1605]: time="2025-11-04T05:00:59.847109590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 4 05:00:59.847245 containerd[1605]: time="2025-11-04T05:00:59.847121110Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 4 05:00:59.847245 containerd[1605]: time="2025-11-04T05:00:59.847132260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 4 05:00:59.847245 containerd[1605]: time="2025-11-04T05:00:59.847181330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 4 05:00:59.847245 containerd[1605]: time="2025-11-04T05:00:59.847192380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 4 05:00:59.847245 containerd[1605]: time="2025-11-04T05:00:59.847203710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 4 05:00:59.848500 containerd[1605]: time="2025-11-04T05:00:59.848286580Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 4 05:00:59.848500 containerd[1605]: time="2025-11-04T05:00:59.848368890Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 05:00:59.848500 containerd[1605]: time="2025-11-04T05:00:59.848389460Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 05:00:59.849520 containerd[1605]: time="2025-11-04T05:00:59.848398450Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 05:00:59.849594 containerd[1605]: time="2025-11-04T05:00:59.849551170Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 05:00:59.849905 containerd[1605]: time="2025-11-04T05:00:59.849587660Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 4 05:00:59.849942 containerd[1605]: time="2025-11-04T05:00:59.849902280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 4 05:00:59.849942 containerd[1605]: time="2025-11-04T05:00:59.849920020Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 4 05:00:59.852395 containerd[1605]: time="2025-11-04T05:00:59.852291190Z" level=info msg="runtime interface created"
Nov 4 05:00:59.852395 containerd[1605]: time="2025-11-04T05:00:59.852310620Z" level=info msg="created NRI interface"
Nov 4 05:00:59.856323 containerd[1605]: time="2025-11-04T05:00:59.856285520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 4 05:00:59.856323 containerd[1605]: time="2025-11-04T05:00:59.856321020Z" level=info msg="Connect containerd service"
Nov 4 05:00:59.856512 containerd[1605]: time="2025-11-04T05:00:59.856403820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 4 05:00:59.858553 containerd[1605]: time="2025-11-04T05:00:59.858513250Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 4 05:00:59.885658 coreos-metadata[1655]: Nov 04 05:00:59.885 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Nov 4 05:00:59.887037 systemd[1]: issuegen.service: Deactivated successfully.
Nov 4 05:00:59.887360 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 4 05:00:59.897385 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 4 05:00:59.897482 tar[1594]: linux-amd64/README.md
Nov 4 05:00:59.912528 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 05:00:59.927897 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 4 05:00:59.934683 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 4 05:00:59.943667 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 4 05:00:59.949871 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 4 05:00:59.951162 systemd[1]: Reached target getty.target - Login Prompts.
Nov 4 05:00:59.967707 polkitd[1670]: Started polkitd version 126
Nov 4 05:00:59.974715 polkitd[1670]: Loading rules from directory /etc/polkit-1/rules.d
Nov 4 05:00:59.975065 polkitd[1670]: Loading rules from directory /run/polkit-1/rules.d
Nov 4 05:00:59.975169 polkitd[1670]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 4 05:00:59.975515 polkitd[1670]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Nov 4 05:00:59.975585 polkitd[1670]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 4 05:00:59.975665 polkitd[1670]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 4 05:00:59.976337 polkitd[1670]: Finished loading, compiling and executing 2 rules
Nov 4 05:00:59.976710 systemd[1]: Started polkit.service - Authorization Manager.
Nov 4 05:00:59.978582 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 4 05:00:59.979276 polkitd[1670]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 4 05:00:59.994075 systemd-hostnamed[1651]: Hostname set to <172-237-150-130> (transient)
Nov 4 05:00:59.994137 systemd-resolved[1301]: System hostname changed to '172-237-150-130'.
Nov 4 05:01:00.009769 containerd[1605]: time="2025-11-04T05:01:00.009703030Z" level=info msg="Start subscribing containerd event"
Nov 4 05:01:00.009868 containerd[1605]: time="2025-11-04T05:01:00.009838722Z" level=info msg="Start recovering state"
Nov 4 05:01:00.010184 containerd[1605]: time="2025-11-04T05:01:00.010149518Z" level=info msg="Start event monitor"
Nov 4 05:01:00.010437 containerd[1605]: time="2025-11-04T05:01:00.010388645Z" level=info msg="Start cni network conf syncer for default"
Nov 4 05:01:00.010437 containerd[1605]: time="2025-11-04T05:01:00.010404179Z" level=info msg="Start streaming server"
Nov 4 05:01:00.010949 containerd[1605]: time="2025-11-04T05:01:00.010414667Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 4 05:01:00.011084 containerd[1605]: time="2025-11-04T05:01:00.011017188Z" level=info msg="runtime interface starting up..."
Nov 4 05:01:00.011084 containerd[1605]: time="2025-11-04T05:01:00.011084575Z" level=info msg="starting plugins..."
Nov 4 05:01:00.011187 containerd[1605]: time="2025-11-04T05:01:00.011151540Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 4 05:01:00.011392 containerd[1605]: time="2025-11-04T05:01:00.011030990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 4 05:01:00.011583 containerd[1605]: time="2025-11-04T05:01:00.011555464Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 4 05:01:00.012448 containerd[1605]: time="2025-11-04T05:01:00.011772316Z" level=info msg="containerd successfully booted in 0.269201s"
Nov 4 05:01:00.011901 systemd[1]: Started containerd.service - containerd container runtime.
Nov 4 05:01:00.020690 coreos-metadata[1655]: Nov 04 05:01:00.020 INFO Fetch successful
Nov 4 05:01:00.043484 update-ssh-keys[1713]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 05:01:00.044867 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 4 05:01:00.048133 systemd[1]: Finished sshkeys.service.
Nov 4 05:01:00.095549 systemd-networkd[1518]: eth0: Gained IPv6LL
Nov 4 05:01:00.097772 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 05:01:00.100151 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 05:01:00.103377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 05:01:00.105469 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 05:01:00.153982 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 05:01:00.250519 coreos-metadata[1572]: Nov 04 05:01:00.250 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Nov 4 05:01:00.348968 coreos-metadata[1572]: Nov 04 05:01:00.348 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Nov 4 05:01:00.533433 coreos-metadata[1572]: Nov 04 05:01:00.532 INFO Fetch successful
Nov 4 05:01:00.533635 coreos-metadata[1572]: Nov 04 05:01:00.533 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Nov 4 05:01:00.815132 coreos-metadata[1572]: Nov 04 05:01:00.814 INFO Fetch successful
Nov 4 05:01:00.954779 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 4 05:01:00.957637 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 05:01:01.091211 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 05:01:01.094634 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 4 05:01:01.097390 systemd[1]: Startup finished in 2.914s (kernel) + 6.021s (initrd) + 5.392s (userspace) = 14.329s.
Nov 4 05:01:01.102778 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 05:01:01.585165 kubelet[1753]: E1104 05:01:01.585014 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 05:01:01.588929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 05:01:01.589164 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 05:01:01.589680 systemd[1]: kubelet.service: Consumed 865ms CPU time, 257.2M memory peak.
Nov 4 05:01:02.989920 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 4 05:01:02.991091 systemd[1]: Started sshd@0-172.237.150.130:22-139.178.89.65:53152.service - OpenSSH per-connection server daemon (139.178.89.65:53152).
Nov 4 05:01:03.318927 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 53152 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:01:03.321511 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:01:03.330993 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 4 05:01:03.332797 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 4 05:01:03.357148 systemd-logind[1584]: New session 1 of user core.
Nov 4 05:01:03.371571 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 4 05:01:03.376054 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 4 05:01:03.391654 (systemd)[1771]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 4 05:01:03.396034 systemd-logind[1584]: New session c1 of user core.
Nov 4 05:01:03.539791 systemd[1771]: Queued start job for default target default.target.
Nov 4 05:01:03.555184 systemd[1771]: Created slice app.slice - User Application Slice.
Nov 4 05:01:03.555257 systemd[1771]: Reached target paths.target - Paths.
Nov 4 05:01:03.555325 systemd[1771]: Reached target timers.target - Timers.
Nov 4 05:01:03.557158 systemd[1771]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 4 05:01:03.584950 systemd[1771]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 4 05:01:03.585031 systemd[1771]: Reached target sockets.target - Sockets.
Nov 4 05:01:03.585078 systemd[1771]: Reached target basic.target - Basic System.
Nov 4 05:01:03.585125 systemd[1771]: Reached target default.target - Main User Target.
Nov 4 05:01:03.585163 systemd[1771]: Startup finished in 178ms.
Nov 4 05:01:03.585513 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 4 05:01:03.595501 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 4 05:01:03.779637 systemd[1]: Started sshd@1-172.237.150.130:22-139.178.89.65:53154.service - OpenSSH per-connection server daemon (139.178.89.65:53154).
Nov 4 05:01:04.076343 sshd[1782]: Accepted publickey for core from 139.178.89.65 port 53154 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:01:04.077998 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:01:04.084505 systemd-logind[1584]: New session 2 of user core.
Nov 4 05:01:04.091409 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 4 05:01:04.234745 sshd[1785]: Connection closed by 139.178.89.65 port 53154
Nov 4 05:01:04.235448 sshd-session[1782]: pam_unix(sshd:session): session closed for user core
Nov 4 05:01:04.241474 systemd[1]: sshd@1-172.237.150.130:22-139.178.89.65:53154.service: Deactivated successfully.
Nov 4 05:01:04.244137 systemd[1]: session-2.scope: Deactivated successfully.
Nov 4 05:01:04.245556 systemd-logind[1584]: Session 2 logged out. Waiting for processes to exit.
Nov 4 05:01:04.247077 systemd-logind[1584]: Removed session 2.
Nov 4 05:01:04.300094 systemd[1]: Started sshd@2-172.237.150.130:22-139.178.89.65:53158.service - OpenSSH per-connection server daemon (139.178.89.65:53158).
Nov 4 05:01:04.611619 sshd[1791]: Accepted publickey for core from 139.178.89.65 port 53158 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:01:04.613404 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:01:04.618450 systemd-logind[1584]: New session 3 of user core.
Nov 4 05:01:04.623399 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 4 05:01:04.769657 sshd[1794]: Connection closed by 139.178.89.65 port 53158
Nov 4 05:01:04.770378 sshd-session[1791]: pam_unix(sshd:session): session closed for user core
Nov 4 05:01:04.775885 systemd[1]: sshd@2-172.237.150.130:22-139.178.89.65:53158.service: Deactivated successfully.
Nov 4 05:01:04.778579 systemd[1]: session-3.scope: Deactivated successfully.
Nov 4 05:01:04.779660 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit.
Nov 4 05:01:04.781644 systemd-logind[1584]: Removed session 3.
Nov 4 05:01:04.830387 systemd[1]: Started sshd@3-172.237.150.130:22-139.178.89.65:53162.service - OpenSSH per-connection server daemon (139.178.89.65:53162).
Nov 4 05:01:05.141131 sshd[1800]: Accepted publickey for core from 139.178.89.65 port 53162 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:01:05.142860 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:01:05.149697 systemd-logind[1584]: New session 4 of user core.
Nov 4 05:01:05.157391 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 4 05:01:05.301082 sshd[1803]: Connection closed by 139.178.89.65 port 53162
Nov 4 05:01:05.301726 sshd-session[1800]: pam_unix(sshd:session): session closed for user core
Nov 4 05:01:05.306621 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit.
Nov 4 05:01:05.307620 systemd[1]: sshd@3-172.237.150.130:22-139.178.89.65:53162.service: Deactivated successfully.
Nov 4 05:01:05.309824 systemd[1]: session-4.scope: Deactivated successfully.
Nov 4 05:01:05.311509 systemd-logind[1584]: Removed session 4.
Nov 4 05:01:05.367333 systemd[1]: Started sshd@4-172.237.150.130:22-139.178.89.65:53172.service - OpenSSH per-connection server daemon (139.178.89.65:53172).
Nov 4 05:01:05.695517 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 53172 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:01:05.697109 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:01:05.702763 systemd-logind[1584]: New session 5 of user core.
Nov 4 05:01:05.707397 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 4 05:01:05.859734 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 4 05:01:05.860110 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 05:01:05.882587 sudo[1813]: pam_unix(sudo:session): session closed for user root
Nov 4 05:01:05.941596 sshd[1812]: Connection closed by 139.178.89.65 port 53172
Nov 4 05:01:05.942136 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Nov 4 05:01:05.947314 systemd[1]: sshd@4-172.237.150.130:22-139.178.89.65:53172.service: Deactivated successfully.
Nov 4 05:01:05.949574 systemd[1]: session-5.scope: Deactivated successfully.
Nov 4 05:01:05.950859 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit.
Nov 4 05:01:05.952473 systemd-logind[1584]: Removed session 5.
Nov 4 05:01:06.009740 systemd[1]: Started sshd@5-172.237.150.130:22-139.178.89.65:53182.service - OpenSSH per-connection server daemon (139.178.89.65:53182).
Nov 4 05:01:06.315141 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 53182 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:01:06.316899 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:01:06.327165 systemd-logind[1584]: New session 6 of user core.
Nov 4 05:01:06.337395 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 4 05:01:06.430045 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 4 05:01:06.430429 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 05:01:06.437691 sudo[1824]: pam_unix(sudo:session): session closed for user root
Nov 4 05:01:06.444857 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 4 05:01:06.445190 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 05:01:06.456350 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 05:01:06.495952 augenrules[1846]: No rules
Nov 4 05:01:06.497418 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 05:01:06.497757 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 05:01:06.498749 sudo[1823]: pam_unix(sudo:session): session closed for user root
Nov 4 05:01:06.552563 sshd[1822]: Connection closed by 139.178.89.65 port 53182
Nov 4 05:01:06.553027 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Nov 4 05:01:06.557789 systemd[1]: sshd@5-172.237.150.130:22-139.178.89.65:53182.service: Deactivated successfully.
Nov 4 05:01:06.560051 systemd[1]: session-6.scope: Deactivated successfully.
Nov 4 05:01:06.560892 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit.
Nov 4 05:01:06.562716 systemd-logind[1584]: Removed session 6.
Nov 4 05:01:06.610813 systemd[1]: Started sshd@6-172.237.150.130:22-139.178.89.65:55860.service - OpenSSH per-connection server daemon (139.178.89.65:55860).
Nov 4 05:01:06.908349 sshd[1855]: Accepted publickey for core from 139.178.89.65 port 55860 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:01:06.909862 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:01:06.919844 systemd-logind[1584]: New session 7 of user core.
Nov 4 05:01:06.926381 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 05:01:07.016829 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 05:01:07.017163 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 05:01:07.377446 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 05:01:07.393587 (dockerd)[1876]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 05:01:07.638152 dockerd[1876]: time="2025-11-04T05:01:07.637741241Z" level=info msg="Starting up"
Nov 4 05:01:07.640698 dockerd[1876]: time="2025-11-04T05:01:07.640272156Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 05:01:07.652896 dockerd[1876]: time="2025-11-04T05:01:07.652786310Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 05:01:07.668409 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1393587848-merged.mount: Deactivated successfully.
Nov 4 05:01:07.696277 dockerd[1876]: time="2025-11-04T05:01:07.696062220Z" level=info msg="Loading containers: start."
Nov 4 05:01:07.709267 kernel: Initializing XFRM netlink socket
Nov 4 05:01:07.986525 systemd-networkd[1518]: docker0: Link UP
Nov 4 05:01:07.991482 dockerd[1876]: time="2025-11-04T05:01:07.991428900Z" level=info msg="Loading containers: done."
Nov 4 05:01:08.005885 dockerd[1876]: time="2025-11-04T05:01:08.005843919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 05:01:08.006057 dockerd[1876]: time="2025-11-04T05:01:08.005913226Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 05:01:08.006057 dockerd[1876]: time="2025-11-04T05:01:08.006019768Z" level=info msg="Initializing buildkit"
Nov 4 05:01:08.028658 dockerd[1876]: time="2025-11-04T05:01:08.028624843Z" level=info msg="Completed buildkit initialization"
Nov 4 05:01:08.034595 dockerd[1876]: time="2025-11-04T05:01:08.034563921Z" level=info msg="Daemon has completed initialization"
Nov 4 05:01:08.034833 dockerd[1876]: time="2025-11-04T05:01:08.034668438Z" level=info msg="API listen on /run/docker.sock"
Nov 4 05:01:08.034853 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 05:01:08.666628 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2658782961-merged.mount: Deactivated successfully.
Nov 4 05:01:08.870460 containerd[1605]: time="2025-11-04T05:01:08.870388736Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Nov 4 05:01:09.544360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493919512.mount: Deactivated successfully.
Nov 4 05:01:10.474275 containerd[1605]: time="2025-11-04T05:01:10.474190027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:10.475272 containerd[1605]: time="2025-11-04T05:01:10.475190442Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25393225"
Nov 4 05:01:10.475943 containerd[1605]: time="2025-11-04T05:01:10.475898786Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:10.479549 containerd[1605]: time="2025-11-04T05:01:10.478377880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:10.479549 containerd[1605]: time="2025-11-04T05:01:10.479354528Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.608913834s"
Nov 4 05:01:10.479549 containerd[1605]: time="2025-11-04T05:01:10.479382623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Nov 4 05:01:10.480296 containerd[1605]: time="2025-11-04T05:01:10.480217114Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Nov 4 05:01:11.768826 containerd[1605]: time="2025-11-04T05:01:11.768758385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:11.769888 containerd[1605]: time="2025-11-04T05:01:11.769667640Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21151604"
Nov 4 05:01:11.770857 containerd[1605]: time="2025-11-04T05:01:11.770799826Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:11.772765 containerd[1605]: time="2025-11-04T05:01:11.772722184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:11.774331 containerd[1605]: time="2025-11-04T05:01:11.773684750Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.293258038s"
Nov 4 05:01:11.774331 containerd[1605]: time="2025-11-04T05:01:11.773716774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Nov 4 05:01:11.774719 containerd[1605]: time="2025-11-04T05:01:11.774678219Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 4 05:01:11.839726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 05:01:11.841900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 05:01:12.040639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 05:01:12.052527 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 05:01:12.096889 kubelet[2157]: E1104 05:01:12.096805 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 05:01:12.103211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 05:01:12.103635 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 05:01:12.104076 systemd[1]: kubelet.service: Consumed 202ms CPU time, 108.6M memory peak.
Nov 4 05:01:12.788386 containerd[1605]: time="2025-11-04T05:01:12.788335990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:12.789365 containerd[1605]: time="2025-11-04T05:01:12.789102888Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=0"
Nov 4 05:01:12.789918 containerd[1605]: time="2025-11-04T05:01:12.789890426Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:12.792263 containerd[1605]: time="2025-11-04T05:01:12.792199883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:12.793141 containerd[1605]: time="2025-11-04T05:01:12.793110065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.018397319s"
Nov 4 05:01:12.793178 containerd[1605]: time="2025-11-04T05:01:12.793143886Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Nov 4 05:01:12.793867 containerd[1605]: time="2025-11-04T05:01:12.793827659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 4 05:01:14.010601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647495088.mount: Deactivated successfully.
Nov 4 05:01:14.281030 containerd[1605]: time="2025-11-04T05:01:14.280855108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:14.281973 containerd[1605]: time="2025-11-04T05:01:14.281878671Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=0"
Nov 4 05:01:14.283390 containerd[1605]: time="2025-11-04T05:01:14.282460318Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:14.283875 containerd[1605]: time="2025-11-04T05:01:14.283844895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:14.284628 containerd[1605]: time="2025-11-04T05:01:14.284598509Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.49073552s"
Nov 4 05:01:14.284703 containerd[1605]: time="2025-11-04T05:01:14.284688882Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 4 05:01:14.285555 containerd[1605]: time="2025-11-04T05:01:14.285528275Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 4 05:01:14.955849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100488841.mount: Deactivated successfully.
Nov 4 05:01:15.640591 containerd[1605]: time="2025-11-04T05:01:15.640520892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:15.641702 containerd[1605]: time="2025-11-04T05:01:15.641506080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568511"
Nov 4 05:01:15.642455 containerd[1605]: time="2025-11-04T05:01:15.642421779Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:15.644823 containerd[1605]: time="2025-11-04T05:01:15.644791877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:15.645810 containerd[1605]: time="2025-11-04T05:01:15.645779737Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.36022137s"
Nov 4 05:01:15.645858 containerd[1605]: time="2025-11-04T05:01:15.645813722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 4 05:01:15.646899 containerd[1605]: time="2025-11-04T05:01:15.646860161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 4 05:01:16.240835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348615562.mount: Deactivated successfully.
Nov 4 05:01:16.245163 containerd[1605]: time="2025-11-04T05:01:16.245116434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:16.246047 containerd[1605]: time="2025-11-04T05:01:16.246019056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0"
Nov 4 05:01:16.247817 containerd[1605]: time="2025-11-04T05:01:16.246649080Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:16.248488 containerd[1605]: time="2025-11-04T05:01:16.248456026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:16.249356 containerd[1605]: time="2025-11-04T05:01:16.249326920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 602.434367ms"
Nov 4 05:01:16.249436 containerd[1605]: time="2025-11-04T05:01:16.249418741Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Nov 4 05:01:16.250089 containerd[1605]: time="2025-11-04T05:01:16.250059453Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 4 05:01:18.729466 containerd[1605]: time="2025-11-04T05:01:18.728965178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:18.730791 containerd[1605]: time="2025-11-04T05:01:18.730167066Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=61186606"
Nov 4 05:01:18.731388 containerd[1605]: time="2025-11-04T05:01:18.731298837Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:18.733948 containerd[1605]: time="2025-11-04T05:01:18.733913294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:01:18.735756 containerd[1605]: time="2025-11-04T05:01:18.735211217Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.485117556s"
Nov 4 05:01:18.735756 containerd[1605]: time="2025-11-04T05:01:18.735291011Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 4 05:01:20.919831 systemd[1]: Stopped
kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:01:20.920446 systemd[1]: kubelet.service: Consumed 202ms CPU time, 108.6M memory peak. Nov 4 05:01:20.922620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:01:20.950891 systemd[1]: Reload requested from client PID 2302 ('systemctl') (unit session-7.scope)... Nov 4 05:01:20.950981 systemd[1]: Reloading... Nov 4 05:01:21.085256 zram_generator::config[2347]: No configuration found. Nov 4 05:01:21.306758 systemd[1]: Reloading finished in 355 ms. Nov 4 05:01:21.372946 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 05:01:21.373100 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 05:01:21.373612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:01:21.373668 systemd[1]: kubelet.service: Consumed 143ms CPU time, 98.3M memory peak. Nov 4 05:01:21.375917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:01:21.535897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:01:21.540700 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 05:01:21.581773 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 05:01:21.581773 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 05:01:21.581773 kubelet[2401]: I1104 05:01:21.581485 2401 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 05:01:21.665525 kubelet[2401]: I1104 05:01:21.665134 2401 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 05:01:21.665525 kubelet[2401]: I1104 05:01:21.665153 2401 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 05:01:21.665525 kubelet[2401]: I1104 05:01:21.665184 2401 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 05:01:21.665525 kubelet[2401]: I1104 05:01:21.665190 2401 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 05:01:21.665664 kubelet[2401]: I1104 05:01:21.665493 2401 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 05:01:21.670279 kubelet[2401]: E1104 05:01:21.670251 2401 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.237.150.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 05:01:21.672001 kubelet[2401]: I1104 05:01:21.671985 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 05:01:21.675721 kubelet[2401]: I1104 05:01:21.675599 2401 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 05:01:21.679622 kubelet[2401]: I1104 05:01:21.679594 2401 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 05:01:21.679907 kubelet[2401]: I1104 05:01:21.679861 2401 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 05:01:21.680030 kubelet[2401]: I1104 05:01:21.679894 2401 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-150-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 05:01:21.680030 kubelet[2401]: I1104 05:01:21.680023 2401 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 05:01:21.680030 
kubelet[2401]: I1104 05:01:21.680034 2401 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 05:01:21.680191 kubelet[2401]: I1104 05:01:21.680113 2401 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 05:01:21.682243 kubelet[2401]: I1104 05:01:21.682209 2401 state_mem.go:36] "Initialized new in-memory state store" Nov 4 05:01:21.682452 kubelet[2401]: I1104 05:01:21.682438 2401 kubelet.go:475] "Attempting to sync node with API server" Nov 4 05:01:21.682493 kubelet[2401]: I1104 05:01:21.682462 2401 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 05:01:21.682493 kubelet[2401]: I1104 05:01:21.682483 2401 kubelet.go:387] "Adding apiserver pod source" Nov 4 05:01:21.682535 kubelet[2401]: I1104 05:01:21.682500 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 05:01:21.685479 kubelet[2401]: E1104 05:01:21.685453 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.237.150.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 05:01:21.686269 kubelet[2401]: E1104 05:01:21.685705 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.237.150.130:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-150-130&limit=500&resourceVersion=0\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 05:01:21.686269 kubelet[2401]: I1104 05:01:21.685936 2401 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 05:01:21.686342 kubelet[2401]: I1104 05:01:21.686296 2401 kubelet.go:940] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 05:01:21.686342 kubelet[2401]: I1104 05:01:21.686319 2401 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 05:01:21.686385 kubelet[2401]: W1104 05:01:21.686359 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 05:01:21.690441 kubelet[2401]: I1104 05:01:21.690412 2401 server.go:1262] "Started kubelet" Nov 4 05:01:21.691694 kubelet[2401]: I1104 05:01:21.691584 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 05:01:21.697746 kubelet[2401]: E1104 05:01:21.696595 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.150.130:6443/api/v1/namespaces/default/events\": dial tcp 172.237.150.130:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-150-130.1874b51afacf53f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-150-130,UID:172-237-150-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-150-130,},FirstTimestamp:2025-11-04 05:01:21.690375154 +0000 UTC m=+0.144722062,LastTimestamp:2025-11-04 05:01:21.690375154 +0000 UTC m=+0.144722062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-150-130,}" Nov 4 05:01:21.700170 kubelet[2401]: E1104 05:01:21.699430 2401 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 05:01:21.700170 kubelet[2401]: I1104 05:01:21.699467 2401 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 05:01:21.700841 kubelet[2401]: I1104 05:01:21.700827 2401 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 05:01:21.701108 kubelet[2401]: E1104 05:01:21.701090 2401 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-237-150-130\" not found" Nov 4 05:01:21.701277 kubelet[2401]: I1104 05:01:21.701138 2401 server.go:310] "Adding debug handlers to kubelet server" Nov 4 05:01:21.701837 kubelet[2401]: I1104 05:01:21.701803 2401 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 05:01:21.701879 kubelet[2401]: I1104 05:01:21.701855 2401 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 05:01:21.702201 kubelet[2401]: I1104 05:01:21.702177 2401 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 05:01:21.705901 kubelet[2401]: I1104 05:01:21.705876 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 05:01:21.707330 kubelet[2401]: I1104 05:01:21.707316 2401 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 05:01:21.707432 kubelet[2401]: I1104 05:01:21.707421 2401 reconciler.go:29] "Reconciler: start to sync state" Nov 4 05:01:21.707946 kubelet[2401]: E1104 05:01:21.707923 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.237.150.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 
05:01:21.708030 kubelet[2401]: E1104 05:01:21.708004 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.150.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-150-130?timeout=10s\": dial tcp 172.237.150.130:6443: connect: connection refused" interval="200ms" Nov 4 05:01:21.709160 kubelet[2401]: I1104 05:01:21.709135 2401 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 05:01:21.711395 kubelet[2401]: I1104 05:01:21.711370 2401 factory.go:223] Registration of the containerd container factory successfully Nov 4 05:01:21.711395 kubelet[2401]: I1104 05:01:21.711391 2401 factory.go:223] Registration of the systemd container factory successfully Nov 4 05:01:21.731750 kubelet[2401]: I1104 05:01:21.731624 2401 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 05:01:21.735555 kubelet[2401]: I1104 05:01:21.735539 2401 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 4 05:01:21.735659 kubelet[2401]: I1104 05:01:21.735647 2401 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 05:01:21.735765 kubelet[2401]: I1104 05:01:21.735753 2401 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 05:01:21.737001 kubelet[2401]: E1104 05:01:21.736966 2401 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 05:01:21.738179 kubelet[2401]: E1104 05:01:21.738158 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.237.150.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 05:01:21.739665 kubelet[2401]: I1104 05:01:21.739631 2401 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 05:01:21.739901 kubelet[2401]: I1104 05:01:21.739643 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 05:01:21.739901 kubelet[2401]: I1104 05:01:21.739848 2401 state_mem.go:36] "Initialized new in-memory state store" Nov 4 05:01:21.742053 kubelet[2401]: I1104 05:01:21.741830 2401 policy_none.go:49] "None policy: Start" Nov 4 05:01:21.742053 kubelet[2401]: I1104 05:01:21.741855 2401 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 05:01:21.742053 kubelet[2401]: I1104 05:01:21.741867 2401 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 05:01:21.743022 kubelet[2401]: I1104 05:01:21.742991 2401 policy_none.go:47] "Start" Nov 4 05:01:21.748149 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 05:01:21.759694 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 4 05:01:21.764087 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 05:01:21.774385 kubelet[2401]: E1104 05:01:21.774216 2401 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 05:01:21.774454 kubelet[2401]: I1104 05:01:21.774394 2401 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 05:01:21.774454 kubelet[2401]: I1104 05:01:21.774404 2401 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 05:01:21.775058 kubelet[2401]: I1104 05:01:21.774942 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 05:01:21.776493 kubelet[2401]: E1104 05:01:21.776453 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 05:01:21.776724 kubelet[2401]: E1104 05:01:21.776707 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-150-130\" not found" Nov 4 05:01:21.850145 systemd[1]: Created slice kubepods-burstable-pod25328a358094d6e27211ad197b723283.slice - libcontainer container kubepods-burstable-pod25328a358094d6e27211ad197b723283.slice. Nov 4 05:01:21.860686 kubelet[2401]: E1104 05:01:21.860627 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-150-130\" not found" node="172-237-150-130" Nov 4 05:01:21.864045 systemd[1]: Created slice kubepods-burstable-pod1fd308408e16e8955a5d591ac4b05f80.slice - libcontainer container kubepods-burstable-pod1fd308408e16e8955a5d591ac4b05f80.slice. 
Nov 4 05:01:21.875600 kubelet[2401]: E1104 05:01:21.875586 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-150-130\" not found" node="172-237-150-130" Nov 4 05:01:21.877518 kubelet[2401]: I1104 05:01:21.877475 2401 kubelet_node_status.go:75] "Attempting to register node" node="172-237-150-130" Nov 4 05:01:21.878475 kubelet[2401]: E1104 05:01:21.878127 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.150.130:6443/api/v1/nodes\": dial tcp 172.237.150.130:6443: connect: connection refused" node="172-237-150-130" Nov 4 05:01:21.879193 systemd[1]: Created slice kubepods-burstable-pod3fd85465e88350bd735d960290bbaf0f.slice - libcontainer container kubepods-burstable-pod3fd85465e88350bd735d960290bbaf0f.slice. Nov 4 05:01:21.882152 kubelet[2401]: E1104 05:01:21.882100 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-150-130\" not found" node="172-237-150-130" Nov 4 05:01:21.908420 kubelet[2401]: I1104 05:01:21.908401 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25328a358094d6e27211ad197b723283-k8s-certs\") pod \"kube-apiserver-172-237-150-130\" (UID: \"25328a358094d6e27211ad197b723283\") " pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 05:01:21.908670 kubelet[2401]: E1104 05:01:21.908470 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.150.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-150-130?timeout=10s\": dial tcp 172.237.150.130:6443: connect: connection refused" interval="400ms" Nov 4 05:01:21.908670 kubelet[2401]: I1104 05:01:21.908486 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/25328a358094d6e27211ad197b723283-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-150-130\" (UID: \"25328a358094d6e27211ad197b723283\") " pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 05:01:21.908670 kubelet[2401]: I1104 05:01:21.908510 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-flexvolume-dir\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:21.908670 kubelet[2401]: I1104 05:01:21.908552 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fd85465e88350bd735d960290bbaf0f-kubeconfig\") pod \"kube-scheduler-172-237-150-130\" (UID: \"3fd85465e88350bd735d960290bbaf0f\") " pod="kube-system/kube-scheduler-172-237-150-130" Nov 4 05:01:21.908670 kubelet[2401]: I1104 05:01:21.908566 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-ca-certs\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:21.908809 kubelet[2401]: I1104 05:01:21.908581 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-k8s-certs\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:21.908809 kubelet[2401]: I1104 05:01:21.908604 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-kubeconfig\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:21.908809 kubelet[2401]: I1104 05:01:21.908618 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:21.908809 kubelet[2401]: I1104 05:01:21.908634 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25328a358094d6e27211ad197b723283-ca-certs\") pod \"kube-apiserver-172-237-150-130\" (UID: \"25328a358094d6e27211ad197b723283\") " pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 05:01:22.081149 kubelet[2401]: I1104 05:01:22.080921 2401 kubelet_node_status.go:75] "Attempting to register node" node="172-237-150-130" Nov 4 05:01:22.081149 kubelet[2401]: E1104 05:01:22.081119 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.150.130:6443/api/v1/nodes\": dial tcp 172.237.150.130:6443: connect: connection refused" node="172-237-150-130" Nov 4 05:01:22.163203 kubelet[2401]: E1104 05:01:22.163124 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:22.164030 containerd[1605]: time="2025-11-04T05:01:22.163741762Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-237-150-130,Uid:25328a358094d6e27211ad197b723283,Namespace:kube-system,Attempt:0,}" Nov 4 05:01:22.177216 kubelet[2401]: E1104 05:01:22.177198 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:22.177614 containerd[1605]: time="2025-11-04T05:01:22.177509398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-150-130,Uid:1fd308408e16e8955a5d591ac4b05f80,Namespace:kube-system,Attempt:0,}" Nov 4 05:01:22.187121 kubelet[2401]: E1104 05:01:22.186998 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:22.187742 containerd[1605]: time="2025-11-04T05:01:22.187588731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-150-130,Uid:3fd85465e88350bd735d960290bbaf0f,Namespace:kube-system,Attempt:0,}" Nov 4 05:01:22.309415 kubelet[2401]: E1104 05:01:22.309327 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.150.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-150-130?timeout=10s\": dial tcp 172.237.150.130:6443: connect: connection refused" interval="800ms" Nov 4 05:01:22.484432 kubelet[2401]: I1104 05:01:22.484110 2401 kubelet_node_status.go:75] "Attempting to register node" node="172-237-150-130" Nov 4 05:01:22.484758 kubelet[2401]: E1104 05:01:22.484715 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.150.130:6443/api/v1/nodes\": dial tcp 172.237.150.130:6443: connect: connection refused" node="172-237-150-130" Nov 4 05:01:22.689429 kubelet[2401]: E1104 05:01:22.689377 2401 reflector.go:205] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://172.237.150.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 05:01:22.753561 kubelet[2401]: E1104 05:01:22.753477 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.237.150.130:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-150-130&limit=500&resourceVersion=0\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 05:01:22.775611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015957910.mount: Deactivated successfully. Nov 4 05:01:22.780523 containerd[1605]: time="2025-11-04T05:01:22.780483143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:01:22.786661 containerd[1605]: time="2025-11-04T05:01:22.786198656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 05:01:22.787525 containerd[1605]: time="2025-11-04T05:01:22.787484213Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 05:01:22.788063 containerd[1605]: time="2025-11-04T05:01:22.788032029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:01:22.788385 containerd[1605]: time="2025-11-04T05:01:22.788362899Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 05:01:22.788607 containerd[1605]: time="2025-11-04T05:01:22.788572642Z" level=info msg="ImageCreate event 
name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:01:22.792736 containerd[1605]: time="2025-11-04T05:01:22.792695307Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:01:22.793874 containerd[1605]: time="2025-11-04T05:01:22.793401735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 614.770174ms" Nov 4 05:01:22.794688 containerd[1605]: time="2025-11-04T05:01:22.794646556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 629.238318ms" Nov 4 05:01:22.795205 containerd[1605]: time="2025-11-04T05:01:22.795166011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:01:22.795556 containerd[1605]: time="2025-11-04T05:01:22.795518950Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 606.567612ms" Nov 4 05:01:22.826810 containerd[1605]: time="2025-11-04T05:01:22.826760515Z" level=info msg="connecting to shim 95943720df92479cf5dfc36627a0860cc982df80a8a011a8f534f6d6c0eaf70c" address="unix:///run/containerd/s/248defbf602985a53197005fbe62182becfcc27c5e10f3eff0b2a9c7066400f7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:01:22.839921 containerd[1605]: time="2025-11-04T05:01:22.839712360Z" level=info msg="connecting to shim 4ab995083e795fb5b28f7cd02d4126012f142845e79444920f5f2976a024ed22" address="unix:///run/containerd/s/ababa41be9344e1b3c08d8bd2bba4f5859268d6429c8fa4b15226023e8a13744" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:01:22.840171 containerd[1605]: time="2025-11-04T05:01:22.839854106Z" level=info msg="connecting to shim 3ac9f7501de8b4329d7f8588d71a932b263955a31c7a97a101573de2308a30e8" address="unix:///run/containerd/s/9bfdb4eecd34406fb151e95cfb41ee042d00aae8e49c4fe5456d8976514f1c0e" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:01:22.869447 systemd[1]: Started cri-containerd-95943720df92479cf5dfc36627a0860cc982df80a8a011a8f534f6d6c0eaf70c.scope - libcontainer container 95943720df92479cf5dfc36627a0860cc982df80a8a011a8f534f6d6c0eaf70c. Nov 4 05:01:22.881500 systemd[1]: Started cri-containerd-4ab995083e795fb5b28f7cd02d4126012f142845e79444920f5f2976a024ed22.scope - libcontainer container 4ab995083e795fb5b28f7cd02d4126012f142845e79444920f5f2976a024ed22. Nov 4 05:01:22.887582 systemd[1]: Started cri-containerd-3ac9f7501de8b4329d7f8588d71a932b263955a31c7a97a101573de2308a30e8.scope - libcontainer container 3ac9f7501de8b4329d7f8588d71a932b263955a31c7a97a101573de2308a30e8. 
Nov 4 05:01:22.908011 kubelet[2401]: E1104 05:01:22.907984 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.237.150.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 05:01:22.962256 containerd[1605]: time="2025-11-04T05:01:22.962008966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-150-130,Uid:1fd308408e16e8955a5d591ac4b05f80,Namespace:kube-system,Attempt:0,} returns sandbox id \"95943720df92479cf5dfc36627a0860cc982df80a8a011a8f534f6d6c0eaf70c\"" Nov 4 05:01:22.965104 kubelet[2401]: E1104 05:01:22.965084 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:22.968526 containerd[1605]: time="2025-11-04T05:01:22.968283650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-150-130,Uid:25328a358094d6e27211ad197b723283,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab995083e795fb5b28f7cd02d4126012f142845e79444920f5f2976a024ed22\"" Nov 4 05:01:22.969599 kubelet[2401]: E1104 05:01:22.969559 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:22.971805 containerd[1605]: time="2025-11-04T05:01:22.971729018Z" level=info msg="CreateContainer within sandbox \"95943720df92479cf5dfc36627a0860cc982df80a8a011a8f534f6d6c0eaf70c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 05:01:22.975561 containerd[1605]: time="2025-11-04T05:01:22.975537659Z" level=info msg="CreateContainer within sandbox 
\"4ab995083e795fb5b28f7cd02d4126012f142845e79444920f5f2976a024ed22\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 05:01:22.977645 containerd[1605]: time="2025-11-04T05:01:22.977591149Z" level=info msg="Container 40d538f679613f97154cacf7ccfa61a7275ad3d95957819b3d15e3b1275d339f: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:22.984255 containerd[1605]: time="2025-11-04T05:01:22.983283832Z" level=info msg="CreateContainer within sandbox \"95943720df92479cf5dfc36627a0860cc982df80a8a011a8f534f6d6c0eaf70c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40d538f679613f97154cacf7ccfa61a7275ad3d95957819b3d15e3b1275d339f\"" Nov 4 05:01:22.984651 containerd[1605]: time="2025-11-04T05:01:22.984592708Z" level=info msg="StartContainer for \"40d538f679613f97154cacf7ccfa61a7275ad3d95957819b3d15e3b1275d339f\"" Nov 4 05:01:22.986021 containerd[1605]: time="2025-11-04T05:01:22.985961198Z" level=info msg="Container 19a02f1bf6688b7b95474573e4e8ac612b25db8131d53f7e29b1bafc3d2e431f: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:22.988960 containerd[1605]: time="2025-11-04T05:01:22.988937391Z" level=info msg="connecting to shim 40d538f679613f97154cacf7ccfa61a7275ad3d95957819b3d15e3b1275d339f" address="unix:///run/containerd/s/248defbf602985a53197005fbe62182becfcc27c5e10f3eff0b2a9c7066400f7" protocol=ttrpc version=3 Nov 4 05:01:22.991573 containerd[1605]: time="2025-11-04T05:01:22.991538896Z" level=info msg="CreateContainer within sandbox \"4ab995083e795fb5b28f7cd02d4126012f142845e79444920f5f2976a024ed22\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"19a02f1bf6688b7b95474573e4e8ac612b25db8131d53f7e29b1bafc3d2e431f\"" Nov 4 05:01:22.992563 containerd[1605]: time="2025-11-04T05:01:22.992489611Z" level=info msg="StartContainer for \"19a02f1bf6688b7b95474573e4e8ac612b25db8131d53f7e29b1bafc3d2e431f\"" Nov 4 05:01:22.993543 containerd[1605]: time="2025-11-04T05:01:22.993465046Z" 
level=info msg="connecting to shim 19a02f1bf6688b7b95474573e4e8ac612b25db8131d53f7e29b1bafc3d2e431f" address="unix:///run/containerd/s/ababa41be9344e1b3c08d8bd2bba4f5859268d6429c8fa4b15226023e8a13744" protocol=ttrpc version=3 Nov 4 05:01:23.022395 systemd[1]: Started cri-containerd-40d538f679613f97154cacf7ccfa61a7275ad3d95957819b3d15e3b1275d339f.scope - libcontainer container 40d538f679613f97154cacf7ccfa61a7275ad3d95957819b3d15e3b1275d339f. Nov 4 05:01:23.027308 systemd[1]: Started cri-containerd-19a02f1bf6688b7b95474573e4e8ac612b25db8131d53f7e29b1bafc3d2e431f.scope - libcontainer container 19a02f1bf6688b7b95474573e4e8ac612b25db8131d53f7e29b1bafc3d2e431f. Nov 4 05:01:23.033213 containerd[1605]: time="2025-11-04T05:01:23.033167852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-150-130,Uid:3fd85465e88350bd735d960290bbaf0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ac9f7501de8b4329d7f8588d71a932b263955a31c7a97a101573de2308a30e8\"" Nov 4 05:01:23.034447 kubelet[2401]: E1104 05:01:23.034389 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:23.038908 containerd[1605]: time="2025-11-04T05:01:23.038867098Z" level=info msg="CreateContainer within sandbox \"3ac9f7501de8b4329d7f8588d71a932b263955a31c7a97a101573de2308a30e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 05:01:23.049307 containerd[1605]: time="2025-11-04T05:01:23.049271757Z" level=info msg="Container b244fd39ba232d460f5ba99f44ebc0bc0772f8c8c3f4679870e115659b6dd99e: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:23.053409 containerd[1605]: time="2025-11-04T05:01:23.053215647Z" level=info msg="CreateContainer within sandbox \"3ac9f7501de8b4329d7f8588d71a932b263955a31c7a97a101573de2308a30e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"b244fd39ba232d460f5ba99f44ebc0bc0772f8c8c3f4679870e115659b6dd99e\"" Nov 4 05:01:23.056250 containerd[1605]: time="2025-11-04T05:01:23.054607197Z" level=info msg="StartContainer for \"b244fd39ba232d460f5ba99f44ebc0bc0772f8c8c3f4679870e115659b6dd99e\"" Nov 4 05:01:23.056250 containerd[1605]: time="2025-11-04T05:01:23.055471725Z" level=info msg="connecting to shim b244fd39ba232d460f5ba99f44ebc0bc0772f8c8c3f4679870e115659b6dd99e" address="unix:///run/containerd/s/9bfdb4eecd34406fb151e95cfb41ee042d00aae8e49c4fe5456d8976514f1c0e" protocol=ttrpc version=3 Nov 4 05:01:23.081487 systemd[1]: Started cri-containerd-b244fd39ba232d460f5ba99f44ebc0bc0772f8c8c3f4679870e115659b6dd99e.scope - libcontainer container b244fd39ba232d460f5ba99f44ebc0bc0772f8c8c3f4679870e115659b6dd99e. Nov 4 05:01:23.087553 kubelet[2401]: E1104 05:01:23.087495 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.237.150.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.150.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 05:01:23.112859 kubelet[2401]: E1104 05:01:23.112479 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.150.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-150-130?timeout=10s\": dial tcp 172.237.150.130:6443: connect: connection refused" interval="1.6s" Nov 4 05:01:23.144341 containerd[1605]: time="2025-11-04T05:01:23.144290891Z" level=info msg="StartContainer for \"40d538f679613f97154cacf7ccfa61a7275ad3d95957819b3d15e3b1275d339f\" returns successfully" Nov 4 05:01:23.193512 containerd[1605]: time="2025-11-04T05:01:23.193422577Z" level=info msg="StartContainer for \"19a02f1bf6688b7b95474573e4e8ac612b25db8131d53f7e29b1bafc3d2e431f\" returns successfully" Nov 4 05:01:23.208938 containerd[1605]: 
time="2025-11-04T05:01:23.208895014Z" level=info msg="StartContainer for \"b244fd39ba232d460f5ba99f44ebc0bc0772f8c8c3f4679870e115659b6dd99e\" returns successfully" Nov 4 05:01:23.286739 kubelet[2401]: I1104 05:01:23.286603 2401 kubelet_node_status.go:75] "Attempting to register node" node="172-237-150-130" Nov 4 05:01:23.749124 kubelet[2401]: E1104 05:01:23.748970 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-150-130\" not found" node="172-237-150-130" Nov 4 05:01:23.749124 kubelet[2401]: E1104 05:01:23.749094 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:23.752252 kubelet[2401]: E1104 05:01:23.751353 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-150-130\" not found" node="172-237-150-130" Nov 4 05:01:23.752252 kubelet[2401]: E1104 05:01:23.751438 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:23.754691 kubelet[2401]: E1104 05:01:23.754661 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-150-130\" not found" node="172-237-150-130" Nov 4 05:01:23.754791 kubelet[2401]: E1104 05:01:23.754763 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:24.633329 kubelet[2401]: I1104 05:01:24.633288 2401 kubelet_node_status.go:78] "Successfully registered node" node="172-237-150-130" Nov 4 05:01:24.633513 kubelet[2401]: E1104 05:01:24.633356 2401 kubelet_node_status.go:486] "Error updating node 
status, will retry" err="error getting node \"172-237-150-130\": node \"172-237-150-130\" not found" Nov 4 05:01:24.658253 kubelet[2401]: E1104 05:01:24.658207 2401 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-237-150-130\" not found" Nov 4 05:01:24.756991 kubelet[2401]: E1104 05:01:24.756959 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-150-130\" not found" node="172-237-150-130" Nov 4 05:01:24.757324 kubelet[2401]: E1104 05:01:24.757079 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:24.758338 kubelet[2401]: E1104 05:01:24.758310 2401 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-237-150-130\" not found" Nov 4 05:01:24.758383 kubelet[2401]: E1104 05:01:24.758373 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-150-130\" not found" node="172-237-150-130" Nov 4 05:01:24.758565 kubelet[2401]: E1104 05:01:24.758537 2401 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:24.859142 kubelet[2401]: E1104 05:01:24.859095 2401 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-237-150-130\" not found" Nov 4 05:01:24.960032 kubelet[2401]: E1104 05:01:24.959893 2401 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-237-150-130\" not found" Nov 4 05:01:25.007895 kubelet[2401]: I1104 05:01:25.007817 2401 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:25.012921 kubelet[2401]: E1104 
05:01:25.012867 2401 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-150-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:25.012921 kubelet[2401]: I1104 05:01:25.012895 2401 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-150-130" Nov 4 05:01:25.014719 kubelet[2401]: E1104 05:01:25.014663 2401 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-150-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-150-130" Nov 4 05:01:25.014719 kubelet[2401]: I1104 05:01:25.014698 2401 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 05:01:25.016372 kubelet[2401]: E1104 05:01:25.016344 2401 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-150-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 05:01:25.687064 kubelet[2401]: I1104 05:01:25.686892 2401 apiserver.go:52] "Watching apiserver" Nov 4 05:01:25.707735 kubelet[2401]: I1104 05:01:25.707661 2401 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 05:01:26.496728 systemd[1]: Reload requested from client PID 2688 ('systemctl') (unit session-7.scope)... Nov 4 05:01:26.496749 systemd[1]: Reloading... Nov 4 05:01:26.621267 zram_generator::config[2742]: No configuration found. Nov 4 05:01:26.829364 systemd[1]: Reloading finished in 331 ms. Nov 4 05:01:26.870438 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:01:26.886210 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 05:01:26.886606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 4 05:01:26.886677 systemd[1]: kubelet.service: Consumed 528ms CPU time, 124.9M memory peak. Nov 4 05:01:26.890474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:01:27.090723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:01:27.102867 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 05:01:27.172505 kubelet[2784]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 05:01:27.172774 kubelet[2784]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 05:01:27.172774 kubelet[2784]: I1104 05:01:27.172590 2784 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 05:01:27.181844 kubelet[2784]: I1104 05:01:27.181803 2784 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 05:01:27.181844 kubelet[2784]: I1104 05:01:27.181829 2784 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 05:01:27.181918 kubelet[2784]: I1104 05:01:27.181857 2784 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 05:01:27.181918 kubelet[2784]: I1104 05:01:27.181864 2784 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 4 05:01:27.182064 kubelet[2784]: I1104 05:01:27.182033 2784 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 05:01:27.184426 kubelet[2784]: I1104 05:01:27.183922 2784 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 05:01:27.187924 kubelet[2784]: I1104 05:01:27.187907 2784 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 05:01:27.195148 kubelet[2784]: I1104 05:01:27.195086 2784 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 05:01:27.201156 kubelet[2784]: I1104 05:01:27.201136 2784 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 4 05:01:27.203341 kubelet[2784]: I1104 05:01:27.201716 2784 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 05:01:27.203341 kubelet[2784]: I1104 05:01:27.201740 2784 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"172-237-150-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 05:01:27.203341 kubelet[2784]: I1104 05:01:27.201873 2784 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 05:01:27.203341 kubelet[2784]: I1104 05:01:27.201881 2784 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 05:01:27.203529 kubelet[2784]: I1104 05:01:27.201903 2784 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 05:01:27.203598 kubelet[2784]: I1104 05:01:27.203585 2784 state_mem.go:36] 
"Initialized new in-memory state store" Nov 4 05:01:27.203831 kubelet[2784]: I1104 05:01:27.203816 2784 kubelet.go:475] "Attempting to sync node with API server" Nov 4 05:01:27.203890 kubelet[2784]: I1104 05:01:27.203881 2784 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 05:01:27.203959 kubelet[2784]: I1104 05:01:27.203950 2784 kubelet.go:387] "Adding apiserver pod source" Nov 4 05:01:27.204020 kubelet[2784]: I1104 05:01:27.204012 2784 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 05:01:27.211984 kubelet[2784]: I1104 05:01:27.211922 2784 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 05:01:27.214859 kubelet[2784]: I1104 05:01:27.214793 2784 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 05:01:27.214859 kubelet[2784]: I1104 05:01:27.214841 2784 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 05:01:27.220344 kubelet[2784]: I1104 05:01:27.220328 2784 server.go:1262] "Started kubelet" Nov 4 05:01:27.221509 kubelet[2784]: I1104 05:01:27.221488 2784 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 05:01:27.232132 kubelet[2784]: I1104 05:01:27.232071 2784 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 05:01:27.233531 kubelet[2784]: I1104 05:01:27.233517 2784 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 05:01:27.245628 kubelet[2784]: I1104 05:01:27.233784 2784 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 05:01:27.245697 kubelet[2784]: I1104 05:01:27.234590 2784 server.go:310] "Adding debug handlers to kubelet server" Nov 4 05:01:27.248412 kubelet[2784]: I1104 05:01:27.234654 2784 ratelimit.go:56] 
"Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 05:01:27.248999 kubelet[2784]: I1104 05:01:27.248957 2784 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 05:01:27.249465 kubelet[2784]: I1104 05:01:27.249428 2784 factory.go:223] Registration of the systemd container factory successfully Nov 4 05:01:27.249668 kubelet[2784]: I1104 05:01:27.249628 2784 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 05:01:27.249789 kubelet[2784]: I1104 05:01:27.249775 2784 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 05:01:27.251581 kubelet[2784]: I1104 05:01:27.245019 2784 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 05:01:27.253635 kubelet[2784]: I1104 05:01:27.253619 2784 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 4 05:01:27.253705 kubelet[2784]: I1104 05:01:27.253695 2784 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 05:01:27.253855 kubelet[2784]: I1104 05:01:27.253841 2784 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 05:01:27.254105 kubelet[2784]: E1104 05:01:27.254054 2784 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 05:01:27.254190 kubelet[2784]: E1104 05:01:27.254132 2784 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 05:01:27.254482 kubelet[2784]: I1104 05:01:27.241071 2784 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 05:01:27.255744 kubelet[2784]: I1104 05:01:27.255714 2784 reconciler.go:29] "Reconciler: start to sync state" Nov 4 05:01:27.257086 kubelet[2784]: I1104 05:01:27.257069 2784 factory.go:223] Registration of the containerd container factory successfully Nov 4 05:01:27.316958 kubelet[2784]: I1104 05:01:27.316901 2784 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 05:01:27.316958 kubelet[2784]: I1104 05:01:27.316916 2784 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 05:01:27.317168 kubelet[2784]: I1104 05:01:27.317076 2784 state_mem.go:36] "Initialized new in-memory state store" Nov 4 05:01:27.317345 kubelet[2784]: I1104 05:01:27.317329 2784 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 05:01:27.317416 kubelet[2784]: I1104 05:01:27.317397 2784 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 05:01:27.317475 kubelet[2784]: I1104 05:01:27.317466 2784 policy_none.go:49] "None policy: Start" Nov 4 05:01:27.317521 kubelet[2784]: I1104 05:01:27.317513 2784 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 05:01:27.317700 kubelet[2784]: I1104 05:01:27.317555 2784 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 05:01:27.317700 kubelet[2784]: I1104 05:01:27.317640 2784 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 4 05:01:27.317700 kubelet[2784]: I1104 05:01:27.317648 2784 policy_none.go:47] "Start" Nov 4 05:01:27.322323 kubelet[2784]: E1104 05:01:27.322307 2784 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" 
Nov 4 05:01:27.323316 kubelet[2784]: I1104 05:01:27.322797 2784 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 05:01:27.323316 kubelet[2784]: I1104 05:01:27.322816 2784 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 05:01:27.323316 kubelet[2784]: I1104 05:01:27.322999 2784 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 05:01:27.331278 kubelet[2784]: E1104 05:01:27.329974 2784 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 05:01:27.355283 kubelet[2784]: I1104 05:01:27.355184 2784 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-150-130" Nov 4 05:01:27.355878 kubelet[2784]: I1104 05:01:27.355483 2784 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 05:01:27.356056 kubelet[2784]: I1104 05:01:27.355608 2784 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:27.438252 kubelet[2784]: I1104 05:01:27.438188 2784 kubelet_node_status.go:75] "Attempting to register node" node="172-237-150-130" Nov 4 05:01:27.445011 kubelet[2784]: I1104 05:01:27.444980 2784 kubelet_node_status.go:124] "Node was previously registered" node="172-237-150-130" Nov 4 05:01:27.445161 kubelet[2784]: I1104 05:01:27.445054 2784 kubelet_node_status.go:78] "Successfully registered node" node="172-237-150-130" Nov 4 05:01:27.456683 kubelet[2784]: I1104 05:01:27.456655 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25328a358094d6e27211ad197b723283-ca-certs\") pod \"kube-apiserver-172-237-150-130\" (UID: \"25328a358094d6e27211ad197b723283\") " pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 
05:01:27.456755 kubelet[2784]: I1104 05:01:27.456690 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25328a358094d6e27211ad197b723283-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-150-130\" (UID: \"25328a358094d6e27211ad197b723283\") " pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 05:01:27.456755 kubelet[2784]: I1104 05:01:27.456710 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-ca-certs\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:27.456755 kubelet[2784]: I1104 05:01:27.456725 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-kubeconfig\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:27.456755 kubelet[2784]: I1104 05:01:27.456739 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:27.456755 kubelet[2784]: I1104 05:01:27.456755 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fd85465e88350bd735d960290bbaf0f-kubeconfig\") pod \"kube-scheduler-172-237-150-130\" (UID: 
\"3fd85465e88350bd735d960290bbaf0f\") " pod="kube-system/kube-scheduler-172-237-150-130" Nov 4 05:01:27.456902 kubelet[2784]: I1104 05:01:27.456769 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25328a358094d6e27211ad197b723283-k8s-certs\") pod \"kube-apiserver-172-237-150-130\" (UID: \"25328a358094d6e27211ad197b723283\") " pod="kube-system/kube-apiserver-172-237-150-130" Nov 4 05:01:27.456902 kubelet[2784]: I1104 05:01:27.456783 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-flexvolume-dir\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:27.456902 kubelet[2784]: I1104 05:01:27.456797 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fd308408e16e8955a5d591ac4b05f80-k8s-certs\") pod \"kube-controller-manager-172-237-150-130\" (UID: \"1fd308408e16e8955a5d591ac4b05f80\") " pod="kube-system/kube-controller-manager-172-237-150-130" Nov 4 05:01:27.495181 sudo[2821]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 4 05:01:27.495594 sudo[2821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 4 05:01:27.663029 kubelet[2784]: E1104 05:01:27.662906 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:27.664807 kubelet[2784]: E1104 05:01:27.664768 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:27.666046 kubelet[2784]: E1104 05:01:27.666016 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:27.822037 sudo[2821]: pam_unix(sudo:session): session closed for user root Nov 4 05:01:28.209557 kubelet[2784]: I1104 05:01:28.209478 2784 apiserver.go:52] "Watching apiserver" Nov 4 05:01:28.246369 kubelet[2784]: I1104 05:01:28.246281 2784 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 05:01:28.304315 kubelet[2784]: E1104 05:01:28.303274 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:28.304315 kubelet[2784]: E1104 05:01:28.303393 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:28.304621 kubelet[2784]: E1104 05:01:28.304604 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:28.330069 kubelet[2784]: I1104 05:01:28.329989 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-150-130" podStartSLOduration=1.32997791 podStartE2EDuration="1.32997791s" podCreationTimestamp="2025-11-04 05:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:01:28.329813921 +0000 UTC m=+1.220149698" watchObservedRunningTime="2025-11-04 05:01:28.32997791 +0000 UTC m=+1.220313667" Nov 4 05:01:28.330344 
kubelet[2784]: I1104 05:01:28.330095 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-150-130" podStartSLOduration=1.3300904199999999 podStartE2EDuration="1.33009042s" podCreationTimestamp="2025-11-04 05:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:01:28.321907012 +0000 UTC m=+1.212242769" watchObservedRunningTime="2025-11-04 05:01:28.33009042 +0000 UTC m=+1.220426177" Nov 4 05:01:28.348431 kubelet[2784]: I1104 05:01:28.348351 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-150-130" podStartSLOduration=1.3483017130000001 podStartE2EDuration="1.348301713s" podCreationTimestamp="2025-11-04 05:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:01:28.339444275 +0000 UTC m=+1.229780032" watchObservedRunningTime="2025-11-04 05:01:28.348301713 +0000 UTC m=+1.238637470" Nov 4 05:01:29.106435 sudo[1859]: pam_unix(sudo:session): session closed for user root Nov 4 05:01:29.157011 sshd[1858]: Connection closed by 139.178.89.65 port 55860 Nov 4 05:01:29.159079 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Nov 4 05:01:29.162985 systemd[1]: sshd@6-172.237.150.130:22-139.178.89.65:55860.service: Deactivated successfully. Nov 4 05:01:29.165888 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 05:01:29.167060 systemd[1]: session-7.scope: Consumed 3.942s CPU time, 273.4M memory peak. Nov 4 05:01:29.171716 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit. Nov 4 05:01:29.175325 systemd-logind[1584]: Removed session 7. 
Nov 4 05:01:29.309856 kubelet[2784]: E1104 05:01:29.309798 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:29.310945 kubelet[2784]: E1104 05:01:29.310805 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:29.311565 kubelet[2784]: E1104 05:01:29.311544 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:29.999875 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 4 05:01:31.800446 kubelet[2784]: I1104 05:01:31.800367 2784 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 05:01:31.801250 containerd[1605]: time="2025-11-04T05:01:31.801181835Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 4 05:01:31.801697 kubelet[2784]: I1104 05:01:31.801676 2784 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 05:01:32.757408 kubelet[2784]: E1104 05:01:32.757363 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Nov 4 05:01:32.757537 kubelet[2784]: E1104 05:01:32.757441 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Nov 4 05:01:32.757585 kubelet[2784]: E1104 05:01:32.757211 2784 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-774f8\" is forbidden: User \"system:node:172-237-150-130\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" podUID="3299b83d-f19c-4b85-9d35-9ac56803d54e" pod="kube-system/kube-proxy-774f8" Nov 4 05:01:32.760136 systemd[1]: Created slice kubepods-besteffort-pod3299b83d_f19c_4b85_9d35_9ac56803d54e.slice - libcontainer container kubepods-besteffort-pod3299b83d_f19c_4b85_9d35_9ac56803d54e.slice. 
Nov 4 05:01:32.779402 kubelet[2784]: E1104 05:01:32.779364 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Nov 4 05:01:32.779502 kubelet[2784]: E1104 05:01:32.779434 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Nov 4 05:01:32.779502 kubelet[2784]: E1104 05:01:32.779465 2784 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-774f8\" is forbidden: User \"system:node:172-237-150-130\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" podUID="3299b83d-f19c-4b85-9d35-9ac56803d54e" pod="kube-system/kube-proxy-774f8" Nov 4 05:01:32.779634 kubelet[2784]: E1104 05:01:32.779567 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Nov 4 05:01:32.783111 systemd[1]: Created slice kubepods-burstable-podeb6a131f_8eeb_4fd4_9ac6_2c79a8b74fd3.slice - libcontainer container 
kubepods-burstable-podeb6a131f_8eeb_4fd4_9ac6_2c79a8b74fd3.slice. Nov 4 05:01:32.794457 kubelet[2784]: I1104 05:01:32.794431 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3299b83d-f19c-4b85-9d35-9ac56803d54e-lib-modules\") pod \"kube-proxy-774f8\" (UID: \"3299b83d-f19c-4b85-9d35-9ac56803d54e\") " pod="kube-system/kube-proxy-774f8" Nov 4 05:01:32.794859 kubelet[2784]: I1104 05:01:32.794534 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-hostproc\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.794859 kubelet[2784]: I1104 05:01:32.794552 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cni-path\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.794859 kubelet[2784]: I1104 05:01:32.794577 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9rl7\" (UniqueName: \"kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-kube-api-access-g9rl7\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.794859 kubelet[2784]: I1104 05:01:32.794592 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-run\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.794859 kubelet[2784]: I1104 05:01:32.794604 2784 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-xtables-lock\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.794859 kubelet[2784]: I1104 05:01:32.794618 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-config-path\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.795004 kubelet[2784]: I1104 05:01:32.794631 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3299b83d-f19c-4b85-9d35-9ac56803d54e-xtables-lock\") pod \"kube-proxy-774f8\" (UID: \"3299b83d-f19c-4b85-9d35-9ac56803d54e\") " pod="kube-system/kube-proxy-774f8" Nov 4 05:01:32.795004 kubelet[2784]: I1104 05:01:32.794643 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqhdl\" (UniqueName: \"kubernetes.io/projected/3299b83d-f19c-4b85-9d35-9ac56803d54e-kube-api-access-xqhdl\") pod \"kube-proxy-774f8\" (UID: \"3299b83d-f19c-4b85-9d35-9ac56803d54e\") " pod="kube-system/kube-proxy-774f8" Nov 4 05:01:32.795004 kubelet[2784]: I1104 05:01:32.794655 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-bpf-maps\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.795004 kubelet[2784]: I1104 05:01:32.794668 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-etc-cni-netd\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.795004 kubelet[2784]: I1104 05:01:32.794680 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-lib-modules\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.795004 kubelet[2784]: I1104 05:01:32.794693 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-hubble-tls\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.795121 kubelet[2784]: I1104 05:01:32.794710 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3299b83d-f19c-4b85-9d35-9ac56803d54e-kube-proxy\") pod \"kube-proxy-774f8\" (UID: \"3299b83d-f19c-4b85-9d35-9ac56803d54e\") " pod="kube-system/kube-proxy-774f8" Nov 4 05:01:32.795121 kubelet[2784]: I1104 05:01:32.794724 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-cgroup\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.795121 kubelet[2784]: I1104 05:01:32.794737 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-clustermesh-secrets\") pod \"cilium-zf525\" (UID: 
\"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.795121 kubelet[2784]: I1104 05:01:32.794760 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-host-proc-sys-net\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:32.795121 kubelet[2784]: I1104 05:01:32.794774 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-host-proc-sys-kernel\") pod \"cilium-zf525\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " pod="kube-system/cilium-zf525" Nov 4 05:01:33.023857 kubelet[2784]: E1104 05:01:33.023479 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:33.071087 systemd[1]: Created slice kubepods-besteffort-pod555c0676_a74e_4bdd_92e4_6446cd997796.slice - libcontainer container kubepods-besteffort-pod555c0676_a74e_4bdd_92e4_6446cd997796.slice. 
Nov 4 05:01:33.098134 kubelet[2784]: I1104 05:01:33.098067 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/555c0676-a74e-4bdd-92e4-6446cd997796-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-zl62n\" (UID: \"555c0676-a74e-4bdd-92e4-6446cd997796\") " pod="kube-system/cilium-operator-6f9c7c5859-zl62n" Nov 4 05:01:33.098134 kubelet[2784]: I1104 05:01:33.098121 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjxh\" (UniqueName: \"kubernetes.io/projected/555c0676-a74e-4bdd-92e4-6446cd997796-kube-api-access-rkjxh\") pod \"cilium-operator-6f9c7c5859-zl62n\" (UID: \"555c0676-a74e-4bdd-92e4-6446cd997796\") " pod="kube-system/cilium-operator-6f9c7c5859-zl62n" Nov 4 05:01:33.317968 kubelet[2784]: E1104 05:01:33.317784 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:33.896419 kubelet[2784]: E1104 05:01:33.896275 2784 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:33.898628 kubelet[2784]: E1104 05:01:33.898584 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-config-path podName:eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3 nodeName:}" failed. No retries permitted until 2025-11-04 05:01:34.396678441 +0000 UTC m=+7.287014198 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-config-path") pod "cilium-zf525" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3") : failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:33.902356 kubelet[2784]: E1104 05:01:33.902208 2784 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:33.902356 kubelet[2784]: E1104 05:01:33.902260 2784 projected.go:196] Error preparing data for projected volume kube-api-access-xqhdl for pod kube-system/kube-proxy-774f8: failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:33.902356 kubelet[2784]: E1104 05:01:33.902332 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3299b83d-f19c-4b85-9d35-9ac56803d54e-kube-api-access-xqhdl podName:3299b83d-f19c-4b85-9d35-9ac56803d54e nodeName:}" failed. No retries permitted until 2025-11-04 05:01:34.402316274 +0000 UTC m=+7.292652031 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xqhdl" (UniqueName: "kubernetes.io/projected/3299b83d-f19c-4b85-9d35-9ac56803d54e-kube-api-access-xqhdl") pod "kube-proxy-774f8" (UID: "3299b83d-f19c-4b85-9d35-9ac56803d54e") : failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:33.903189 kubelet[2784]: E1104 05:01:33.903098 2784 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:33.903189 kubelet[2784]: E1104 05:01:33.903124 2784 projected.go:196] Error preparing data for projected volume kube-api-access-g9rl7 for pod kube-system/cilium-zf525: failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:33.903189 kubelet[2784]: E1104 05:01:33.903168 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-kube-api-access-g9rl7 podName:eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3 nodeName:}" failed. No retries permitted until 2025-11-04 05:01:34.403155993 +0000 UTC m=+7.293491750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g9rl7" (UniqueName: "kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-kube-api-access-g9rl7") pod "cilium-zf525" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3") : failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:34.199739 kubelet[2784]: E1104 05:01:34.199196 2784 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:34.199739 kubelet[2784]: E1104 05:01:34.199311 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/555c0676-a74e-4bdd-92e4-6446cd997796-cilium-config-path podName:555c0676-a74e-4bdd-92e4-6446cd997796 nodeName:}" failed. 
No retries permitted until 2025-11-04 05:01:34.699291246 +0000 UTC m=+7.589627013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/555c0676-a74e-4bdd-92e4-6446cd997796-cilium-config-path") pod "cilium-operator-6f9c7c5859-zl62n" (UID: "555c0676-a74e-4bdd-92e4-6446cd997796") : failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:34.204251 kubelet[2784]: E1104 05:01:34.204189 2784 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:34.204251 kubelet[2784]: E1104 05:01:34.204216 2784 projected.go:196] Error preparing data for projected volume kube-api-access-rkjxh for pod kube-system/cilium-operator-6f9c7c5859-zl62n: failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:34.204326 kubelet[2784]: E1104 05:01:34.204284 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/555c0676-a74e-4bdd-92e4-6446cd997796-kube-api-access-rkjxh podName:555c0676-a74e-4bdd-92e4-6446cd997796 nodeName:}" failed. No retries permitted until 2025-11-04 05:01:34.704272805 +0000 UTC m=+7.594608562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rkjxh" (UniqueName: "kubernetes.io/projected/555c0676-a74e-4bdd-92e4-6446cd997796-kube-api-access-rkjxh") pod "cilium-operator-6f9c7c5859-zl62n" (UID: "555c0676-a74e-4bdd-92e4-6446cd997796") : failed to sync configmap cache: timed out waiting for the condition Nov 4 05:01:34.320170 kubelet[2784]: E1104 05:01:34.320068 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:34.570801 kubelet[2784]: E1104 05:01:34.570761 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:34.571511 containerd[1605]: time="2025-11-04T05:01:34.571465560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-774f8,Uid:3299b83d-f19c-4b85-9d35-9ac56803d54e,Namespace:kube-system,Attempt:0,}" Nov 4 05:01:34.589454 kubelet[2784]: E1104 05:01:34.589216 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:34.589763 containerd[1605]: time="2025-11-04T05:01:34.589667796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zf525,Uid:eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3,Namespace:kube-system,Attempt:0,}" Nov 4 05:01:34.596499 containerd[1605]: time="2025-11-04T05:01:34.596404081Z" level=info msg="connecting to shim 4665fe000aeab055a6ac8733c603e7e7829d5c4a980cd3cb3c22845574edea75" address="unix:///run/containerd/s/fbe7a3fea30ae0d7b941934532a01a4b30e0112d0d1f77bb65bd12514103e610" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:01:34.615593 containerd[1605]: time="2025-11-04T05:01:34.615534588Z" level=info msg="connecting to shim 
30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab" address="unix:///run/containerd/s/a236b31a9139d943cd0a866dc3e0ccc3e4934c9e29eb8aee5aa8b10a2f41eb5d" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:01:34.627372 systemd[1]: Started cri-containerd-4665fe000aeab055a6ac8733c603e7e7829d5c4a980cd3cb3c22845574edea75.scope - libcontainer container 4665fe000aeab055a6ac8733c603e7e7829d5c4a980cd3cb3c22845574edea75. Nov 4 05:01:34.654365 systemd[1]: Started cri-containerd-30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab.scope - libcontainer container 30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab. Nov 4 05:01:34.683376 containerd[1605]: time="2025-11-04T05:01:34.683278418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-774f8,Uid:3299b83d-f19c-4b85-9d35-9ac56803d54e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4665fe000aeab055a6ac8733c603e7e7829d5c4a980cd3cb3c22845574edea75\"" Nov 4 05:01:34.684552 kubelet[2784]: E1104 05:01:34.684523 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:34.696098 containerd[1605]: time="2025-11-04T05:01:34.696010718Z" level=info msg="CreateContainer within sandbox \"4665fe000aeab055a6ac8733c603e7e7829d5c4a980cd3cb3c22845574edea75\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 05:01:34.714915 containerd[1605]: time="2025-11-04T05:01:34.714425135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zf525,Uid:eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\"" Nov 4 05:01:34.715172 containerd[1605]: time="2025-11-04T05:01:34.715140724Z" level=info msg="Container 5493fea866f3c0be240de91db27c339c301d3e5307f69c22d04f028616931d57: CDI devices from CRI Config.CDIDevices: []" 
Nov 4 05:01:34.719466 kubelet[2784]: E1104 05:01:34.719447 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:34.722972 containerd[1605]: time="2025-11-04T05:01:34.722118312Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 4 05:01:34.727578 containerd[1605]: time="2025-11-04T05:01:34.726783595Z" level=info msg="CreateContainer within sandbox \"4665fe000aeab055a6ac8733c603e7e7829d5c4a980cd3cb3c22845574edea75\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5493fea866f3c0be240de91db27c339c301d3e5307f69c22d04f028616931d57\"" Nov 4 05:01:34.728449 containerd[1605]: time="2025-11-04T05:01:34.728377031Z" level=info msg="StartContainer for \"5493fea866f3c0be240de91db27c339c301d3e5307f69c22d04f028616931d57\"" Nov 4 05:01:34.732016 containerd[1605]: time="2025-11-04T05:01:34.731978267Z" level=info msg="connecting to shim 5493fea866f3c0be240de91db27c339c301d3e5307f69c22d04f028616931d57" address="unix:///run/containerd/s/fbe7a3fea30ae0d7b941934532a01a4b30e0112d0d1f77bb65bd12514103e610" protocol=ttrpc version=3 Nov 4 05:01:34.755371 systemd[1]: Started cri-containerd-5493fea866f3c0be240de91db27c339c301d3e5307f69c22d04f028616931d57.scope - libcontainer container 5493fea866f3c0be240de91db27c339c301d3e5307f69c22d04f028616931d57. 
Nov 4 05:01:34.809079 kubelet[2784]: E1104 05:01:34.806713 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:34.821449 containerd[1605]: time="2025-11-04T05:01:34.821313267Z" level=info msg="StartContainer for \"5493fea866f3c0be240de91db27c339c301d3e5307f69c22d04f028616931d57\" returns successfully" Nov 4 05:01:34.875904 kubelet[2784]: E1104 05:01:34.875807 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:34.876420 containerd[1605]: time="2025-11-04T05:01:34.876370670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-zl62n,Uid:555c0676-a74e-4bdd-92e4-6446cd997796,Namespace:kube-system,Attempt:0,}" Nov 4 05:01:34.901165 containerd[1605]: time="2025-11-04T05:01:34.901101710Z" level=info msg="connecting to shim 864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62" address="unix:///run/containerd/s/4dceee13ee9ee88d61ddf00e54491fc755e568ee09d5ff77fe8001c74618ec57" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:01:34.941931 systemd[1]: Started cri-containerd-864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62.scope - libcontainer container 864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62. 
Nov 4 05:01:34.995927 containerd[1605]: time="2025-11-04T05:01:34.995889565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-zl62n,Uid:555c0676-a74e-4bdd-92e4-6446cd997796,Namespace:kube-system,Attempt:0,} returns sandbox id \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\"" Nov 4 05:01:34.997183 kubelet[2784]: E1104 05:01:34.997141 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:35.330569 kubelet[2784]: E1104 05:01:35.330528 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:35.333060 kubelet[2784]: E1104 05:01:35.333015 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:35.358811 kubelet[2784]: I1104 05:01:35.358212 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-774f8" podStartSLOduration=3.358081071 podStartE2EDuration="3.358081071s" podCreationTimestamp="2025-11-04 05:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:01:35.347351592 +0000 UTC m=+8.237687349" watchObservedRunningTime="2025-11-04 05:01:35.358081071 +0000 UTC m=+8.248416828" Nov 4 05:01:36.338005 kubelet[2784]: E1104 05:01:36.337955 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:38.423106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3348939243.mount: Deactivated 
successfully. Nov 4 05:01:39.030256 kubelet[2784]: E1104 05:01:39.029361 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:40.133136 containerd[1605]: time="2025-11-04T05:01:40.133072899Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:01:40.134096 containerd[1605]: time="2025-11-04T05:01:40.133893921Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=155039993" Nov 4 05:01:40.134553 containerd[1605]: time="2025-11-04T05:01:40.134519875Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:01:40.135970 containerd[1605]: time="2025-11-04T05:01:40.135943181Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.413562214s" Nov 4 05:01:40.136045 containerd[1605]: time="2025-11-04T05:01:40.136031224Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 4 05:01:40.137748 containerd[1605]: time="2025-11-04T05:01:40.137730810Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 4 05:01:40.141276 containerd[1605]: time="2025-11-04T05:01:40.140862172Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 05:01:40.150277 containerd[1605]: time="2025-11-04T05:01:40.149272120Z" level=info msg="Container 2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:40.161946 containerd[1605]: time="2025-11-04T05:01:40.161916422Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\"" Nov 4 05:01:40.162891 containerd[1605]: time="2025-11-04T05:01:40.162814637Z" level=info msg="StartContainer for \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\"" Nov 4 05:01:40.163976 containerd[1605]: time="2025-11-04T05:01:40.163941381Z" level=info msg="connecting to shim 2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578" address="unix:///run/containerd/s/a236b31a9139d943cd0a866dc3e0ccc3e4934c9e29eb8aee5aa8b10a2f41eb5d" protocol=ttrpc version=3 Nov 4 05:01:40.190357 systemd[1]: Started cri-containerd-2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578.scope - libcontainer container 2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578. Nov 4 05:01:40.227275 containerd[1605]: time="2025-11-04T05:01:40.227215036Z" level=info msg="StartContainer for \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\" returns successfully" Nov 4 05:01:40.242076 systemd[1]: cri-containerd-2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578.scope: Deactivated successfully. 
Nov 4 05:01:40.243901 containerd[1605]: time="2025-11-04T05:01:40.243855934Z" level=info msg="received exit event container_id:\"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\" id:\"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\" pid:3207 exited_at:{seconds:1762232500 nanos:243443368}" Nov 4 05:01:40.266724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578-rootfs.mount: Deactivated successfully. Nov 4 05:01:40.352472 kubelet[2784]: E1104 05:01:40.352436 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:41.362717 kubelet[2784]: E1104 05:01:41.362660 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:41.368199 containerd[1605]: time="2025-11-04T05:01:41.368068395Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 05:01:41.393197 containerd[1605]: time="2025-11-04T05:01:41.393140892Z" level=info msg="Container b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:41.401921 containerd[1605]: time="2025-11-04T05:01:41.401886135Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\"" Nov 4 05:01:41.403947 containerd[1605]: time="2025-11-04T05:01:41.403914950Z" level=info msg="StartContainer for 
\"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\"" Nov 4 05:01:41.405938 containerd[1605]: time="2025-11-04T05:01:41.405908643Z" level=info msg="connecting to shim b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c" address="unix:///run/containerd/s/a236b31a9139d943cd0a866dc3e0ccc3e4934c9e29eb8aee5aa8b10a2f41eb5d" protocol=ttrpc version=3 Nov 4 05:01:41.440367 systemd[1]: Started cri-containerd-b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c.scope - libcontainer container b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c. Nov 4 05:01:41.506064 containerd[1605]: time="2025-11-04T05:01:41.505995180Z" level=info msg="StartContainer for \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\" returns successfully" Nov 4 05:01:41.535991 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 05:01:41.536415 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 05:01:41.536781 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 4 05:01:41.539488 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 05:01:41.542005 systemd[1]: cri-containerd-b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c.scope: Deactivated successfully. Nov 4 05:01:41.544375 containerd[1605]: time="2025-11-04T05:01:41.544352947Z" level=info msg="received exit event container_id:\"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\" id:\"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\" pid:3263 exited_at:{seconds:1762232501 nanos:543993984}" Nov 4 05:01:41.569300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 4 05:01:41.874395 containerd[1605]: time="2025-11-04T05:01:41.874329656Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:01:41.875396 containerd[1605]: time="2025-11-04T05:01:41.875127465Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17532406" Nov 4 05:01:41.875933 containerd[1605]: time="2025-11-04T05:01:41.875897834Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:01:41.877218 containerd[1605]: time="2025-11-04T05:01:41.877188762Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.739377068s" Nov 4 05:01:41.877325 containerd[1605]: time="2025-11-04T05:01:41.877306866Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 4 05:01:41.884896 containerd[1605]: time="2025-11-04T05:01:41.884853365Z" level=info msg="CreateContainer within sandbox \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 4 05:01:41.894164 containerd[1605]: time="2025-11-04T05:01:41.894127217Z" level=info msg="Container 
b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:41.898894 containerd[1605]: time="2025-11-04T05:01:41.898847172Z" level=info msg="CreateContainer within sandbox \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\"" Nov 4 05:01:41.899629 containerd[1605]: time="2025-11-04T05:01:41.899577949Z" level=info msg="StartContainer for \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\"" Nov 4 05:01:41.900755 containerd[1605]: time="2025-11-04T05:01:41.900636528Z" level=info msg="connecting to shim b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3" address="unix:///run/containerd/s/4dceee13ee9ee88d61ddf00e54491fc755e568ee09d5ff77fe8001c74618ec57" protocol=ttrpc version=3 Nov 4 05:01:41.921363 systemd[1]: Started cri-containerd-b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3.scope - libcontainer container b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3. Nov 4 05:01:41.958150 containerd[1605]: time="2025-11-04T05:01:41.958081860Z" level=info msg="StartContainer for \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" returns successfully" Nov 4 05:01:42.151972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c-rootfs.mount: Deactivated successfully. 
Nov 4 05:01:42.367436 kubelet[2784]: E1104 05:01:42.367382 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:42.371436 kubelet[2784]: E1104 05:01:42.371414 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:42.374511 containerd[1605]: time="2025-11-04T05:01:42.374473616Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 05:01:42.392309 containerd[1605]: time="2025-11-04T05:01:42.389525224Z" level=info msg="Container 4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:42.398868 containerd[1605]: time="2025-11-04T05:01:42.398837100Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\"" Nov 4 05:01:42.399199 containerd[1605]: time="2025-11-04T05:01:42.399172092Z" level=info msg="StartContainer for \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\"" Nov 4 05:01:42.401288 containerd[1605]: time="2025-11-04T05:01:42.400216609Z" level=info msg="connecting to shim 4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1" address="unix:///run/containerd/s/a236b31a9139d943cd0a866dc3e0ccc3e4934c9e29eb8aee5aa8b10a2f41eb5d" protocol=ttrpc version=3 Nov 4 05:01:42.442378 systemd[1]: Started cri-containerd-4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1.scope - libcontainer container 
4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1. Nov 4 05:01:42.516270 kubelet[2784]: I1104 05:01:42.516131 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-zl62n" podStartSLOduration=2.636363728 podStartE2EDuration="9.51608762s" podCreationTimestamp="2025-11-04 05:01:33 +0000 UTC" firstStartedPulling="2025-11-04 05:01:34.99855825 +0000 UTC m=+7.888894027" lastFinishedPulling="2025-11-04 05:01:41.878282142 +0000 UTC m=+14.768617919" observedRunningTime="2025-11-04 05:01:42.442170529 +0000 UTC m=+15.332506306" watchObservedRunningTime="2025-11-04 05:01:42.51608762 +0000 UTC m=+15.406423387" Nov 4 05:01:42.585216 containerd[1605]: time="2025-11-04T05:01:42.585170402Z" level=info msg="StartContainer for \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\" returns successfully" Nov 4 05:01:42.598485 systemd[1]: cri-containerd-4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1.scope: Deactivated successfully. Nov 4 05:01:42.599827 containerd[1605]: time="2025-11-04T05:01:42.599796034Z" level=info msg="received exit event container_id:\"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\" id:\"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\" pid:3350 exited_at:{seconds:1762232502 nanos:599451322}" Nov 4 05:01:43.147993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1-rootfs.mount: Deactivated successfully. 
Nov 4 05:01:43.376609 kubelet[2784]: E1104 05:01:43.376555 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:43.377061 kubelet[2784]: E1104 05:01:43.377026 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:43.384253 containerd[1605]: time="2025-11-04T05:01:43.383484456Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 05:01:43.401289 containerd[1605]: time="2025-11-04T05:01:43.397561714Z" level=info msg="Container c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:43.406982 containerd[1605]: time="2025-11-04T05:01:43.406952367Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\"" Nov 4 05:01:43.408320 containerd[1605]: time="2025-11-04T05:01:43.408263330Z" level=info msg="StartContainer for \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\"" Nov 4 05:01:43.409028 containerd[1605]: time="2025-11-04T05:01:43.408976364Z" level=info msg="connecting to shim c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd" address="unix:///run/containerd/s/a236b31a9139d943cd0a866dc3e0ccc3e4934c9e29eb8aee5aa8b10a2f41eb5d" protocol=ttrpc version=3 Nov 4 05:01:43.435392 systemd[1]: Started cri-containerd-c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd.scope - libcontainer container 
c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd. Nov 4 05:01:43.480527 systemd[1]: cri-containerd-c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd.scope: Deactivated successfully. Nov 4 05:01:43.484159 containerd[1605]: time="2025-11-04T05:01:43.484114285Z" level=info msg="received exit event container_id:\"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\" id:\"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\" pid:3392 exited_at:{seconds:1762232503 nanos:481930192}" Nov 4 05:01:43.485794 containerd[1605]: time="2025-11-04T05:01:43.485754199Z" level=info msg="StartContainer for \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\" returns successfully" Nov 4 05:01:43.511499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd-rootfs.mount: Deactivated successfully. Nov 4 05:01:44.384118 kubelet[2784]: E1104 05:01:44.384059 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:44.391020 containerd[1605]: time="2025-11-04T05:01:44.390842656Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 05:01:44.409747 containerd[1605]: time="2025-11-04T05:01:44.409696272Z" level=info msg="Container 883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:44.426190 containerd[1605]: time="2025-11-04T05:01:44.426111021Z" level=info msg="CreateContainer within sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\"" Nov 4 
05:01:44.429503 containerd[1605]: time="2025-11-04T05:01:44.429357384Z" level=info msg="StartContainer for \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\"" Nov 4 05:01:44.431541 containerd[1605]: time="2025-11-04T05:01:44.431496861Z" level=info msg="connecting to shim 883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3" address="unix:///run/containerd/s/a236b31a9139d943cd0a866dc3e0ccc3e4934c9e29eb8aee5aa8b10a2f41eb5d" protocol=ttrpc version=3 Nov 4 05:01:44.461413 systemd[1]: Started cri-containerd-883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3.scope - libcontainer container 883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3. Nov 4 05:01:44.518057 containerd[1605]: time="2025-11-04T05:01:44.518003257Z" level=info msg="StartContainer for \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" returns successfully" Nov 4 05:01:44.694323 kubelet[2784]: I1104 05:01:44.694200 2784 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 4 05:01:44.737617 systemd[1]: Created slice kubepods-burstable-pod9eabb735_d9b7_4b10_9689_6ea0e4de13ad.slice - libcontainer container kubepods-burstable-pod9eabb735_d9b7_4b10_9689_6ea0e4de13ad.slice. Nov 4 05:01:44.750776 systemd[1]: Created slice kubepods-burstable-pod744451a7_2355_4b0c_b339_cab5df53b09f.slice - libcontainer container kubepods-burstable-pod744451a7_2355_4b0c_b339_cab5df53b09f.slice. Nov 4 05:01:44.768352 update_engine[1586]: I20251104 05:01:44.768272 1586 update_attempter.cc:509] Updating boot flags... 
Nov 4 05:01:44.776329 kubelet[2784]: I1104 05:01:44.776170 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqkzz\" (UniqueName: \"kubernetes.io/projected/9eabb735-d9b7-4b10-9689-6ea0e4de13ad-kube-api-access-lqkzz\") pod \"coredns-66bc5c9577-nrg8g\" (UID: \"9eabb735-d9b7-4b10-9689-6ea0e4de13ad\") " pod="kube-system/coredns-66bc5c9577-nrg8g" Nov 4 05:01:44.776900 kubelet[2784]: I1104 05:01:44.776685 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/744451a7-2355-4b0c-b339-cab5df53b09f-config-volume\") pod \"coredns-66bc5c9577-w27g6\" (UID: \"744451a7-2355-4b0c-b339-cab5df53b09f\") " pod="kube-system/coredns-66bc5c9577-w27g6" Nov 4 05:01:44.778424 kubelet[2784]: I1104 05:01:44.776830 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2np7t\" (UniqueName: \"kubernetes.io/projected/744451a7-2355-4b0c-b339-cab5df53b09f-kube-api-access-2np7t\") pod \"coredns-66bc5c9577-w27g6\" (UID: \"744451a7-2355-4b0c-b339-cab5df53b09f\") " pod="kube-system/coredns-66bc5c9577-w27g6" Nov 4 05:01:44.778424 kubelet[2784]: I1104 05:01:44.778108 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eabb735-d9b7-4b10-9689-6ea0e4de13ad-config-volume\") pod \"coredns-66bc5c9577-nrg8g\" (UID: \"9eabb735-d9b7-4b10-9689-6ea0e4de13ad\") " pod="kube-system/coredns-66bc5c9577-nrg8g" Nov 4 05:01:45.055976 kubelet[2784]: E1104 05:01:45.051702 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:45.058047 containerd[1605]: time="2025-11-04T05:01:45.057961353Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-nrg8g,Uid:9eabb735-d9b7-4b10-9689-6ea0e4de13ad,Namespace:kube-system,Attempt:0,}" Nov 4 05:01:45.063177 kubelet[2784]: E1104 05:01:45.063015 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:45.065013 containerd[1605]: time="2025-11-04T05:01:45.064499479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w27g6,Uid:744451a7-2355-4b0c-b339-cab5df53b09f,Namespace:kube-system,Attempt:0,}" Nov 4 05:01:45.392688 kubelet[2784]: E1104 05:01:45.392578 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:46.393622 kubelet[2784]: E1104 05:01:46.393583 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:47.044799 systemd-networkd[1518]: cilium_host: Link UP Nov 4 05:01:47.048463 systemd-networkd[1518]: cilium_net: Link UP Nov 4 05:01:47.049686 systemd-networkd[1518]: cilium_net: Gained carrier Nov 4 05:01:47.051837 systemd-networkd[1518]: cilium_host: Gained carrier Nov 4 05:01:47.191360 systemd-networkd[1518]: cilium_vxlan: Link UP Nov 4 05:01:47.191370 systemd-networkd[1518]: cilium_vxlan: Gained carrier Nov 4 05:01:47.271364 systemd-networkd[1518]: cilium_host: Gained IPv6LL Nov 4 05:01:47.397183 kubelet[2784]: E1104 05:01:47.397029 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:47.407670 systemd-networkd[1518]: cilium_net: Gained IPv6LL Nov 4 05:01:47.413270 kernel: NET: Registered PF_ALG protocol 
family Nov 4 05:01:48.062284 systemd-networkd[1518]: lxc_health: Link UP Nov 4 05:01:48.063582 systemd-networkd[1518]: lxc_health: Gained carrier Nov 4 05:01:48.591402 kubelet[2784]: E1104 05:01:48.591351 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:48.611602 kubelet[2784]: I1104 05:01:48.611181 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zf525" podStartSLOduration=11.195348873 podStartE2EDuration="16.611165905s" podCreationTimestamp="2025-11-04 05:01:32 +0000 UTC" firstStartedPulling="2025-11-04 05:01:34.721076616 +0000 UTC m=+7.611412373" lastFinishedPulling="2025-11-04 05:01:40.136893648 +0000 UTC m=+13.027229405" observedRunningTime="2025-11-04 05:01:45.41239875 +0000 UTC m=+18.302734517" watchObservedRunningTime="2025-11-04 05:01:48.611165905 +0000 UTC m=+21.501501682" Nov 4 05:01:48.646259 kernel: eth0: renamed from tmp2e949 Nov 4 05:01:48.647970 systemd-networkd[1518]: lxc85c97fab4629: Link UP Nov 4 05:01:48.648791 systemd-networkd[1518]: lxc85c97fab4629: Gained carrier Nov 4 05:01:48.683326 systemd-networkd[1518]: lxc72010b001e69: Link UP Nov 4 05:01:48.693268 kernel: eth0: renamed from tmpda5ad Nov 4 05:01:48.697310 systemd-networkd[1518]: lxc72010b001e69: Gained carrier Nov 4 05:01:48.799414 systemd-networkd[1518]: cilium_vxlan: Gained IPv6LL Nov 4 05:01:49.247590 systemd-networkd[1518]: lxc_health: Gained IPv6LL Nov 4 05:01:49.759974 systemd-networkd[1518]: lxc85c97fab4629: Gained IPv6LL Nov 4 05:01:50.144917 kubelet[2784]: I1104 05:01:50.143902 2784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 05:01:50.145815 kubelet[2784]: E1104 05:01:50.145744 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 
172.232.0.16 172.232.0.21" Nov 4 05:01:50.405318 kubelet[2784]: E1104 05:01:50.404216 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:50.719495 systemd-networkd[1518]: lxc72010b001e69: Gained IPv6LL Nov 4 05:01:51.875254 containerd[1605]: time="2025-11-04T05:01:51.874409427Z" level=info msg="connecting to shim 2e94955c148211d3a92442ab61ca62841d972ed25656c4aa39c7b044e7e9312a" address="unix:///run/containerd/s/6cb48fdfdf94417448eaf27797b442d5092ca61a06b6d0a282805725fa384819" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:01:51.899531 containerd[1605]: time="2025-11-04T05:01:51.899468123Z" level=info msg="connecting to shim da5ad847468124a16bc26d46ee72e6d6935cd5704b067c5eb9e15b400f4e545d" address="unix:///run/containerd/s/f80a99d576b4ab708ab194ac62e31efaf6737d562c13d73942b63134b49232b2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:01:51.932428 systemd[1]: Started cri-containerd-2e94955c148211d3a92442ab61ca62841d972ed25656c4aa39c7b044e7e9312a.scope - libcontainer container 2e94955c148211d3a92442ab61ca62841d972ed25656c4aa39c7b044e7e9312a. Nov 4 05:01:51.953434 systemd[1]: Started cri-containerd-da5ad847468124a16bc26d46ee72e6d6935cd5704b067c5eb9e15b400f4e545d.scope - libcontainer container da5ad847468124a16bc26d46ee72e6d6935cd5704b067c5eb9e15b400f4e545d. 
Nov 4 05:01:52.015268 containerd[1605]: time="2025-11-04T05:01:52.014987598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w27g6,Uid:744451a7-2355-4b0c-b339-cab5df53b09f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e94955c148211d3a92442ab61ca62841d972ed25656c4aa39c7b044e7e9312a\"" Nov 4 05:01:52.017528 kubelet[2784]: E1104 05:01:52.017458 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:52.022689 containerd[1605]: time="2025-11-04T05:01:52.022558742Z" level=info msg="CreateContainer within sandbox \"2e94955c148211d3a92442ab61ca62841d972ed25656c4aa39c7b044e7e9312a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 05:01:52.055285 containerd[1605]: time="2025-11-04T05:01:52.055245318Z" level=info msg="Container 12e0197573562a91c98c60eaaa8f4d078a9c3005e6413623a76cd02984122373: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:52.063929 containerd[1605]: time="2025-11-04T05:01:52.063865874Z" level=info msg="CreateContainer within sandbox \"2e94955c148211d3a92442ab61ca62841d972ed25656c4aa39c7b044e7e9312a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12e0197573562a91c98c60eaaa8f4d078a9c3005e6413623a76cd02984122373\"" Nov 4 05:01:52.065286 containerd[1605]: time="2025-11-04T05:01:52.064541648Z" level=info msg="StartContainer for \"12e0197573562a91c98c60eaaa8f4d078a9c3005e6413623a76cd02984122373\"" Nov 4 05:01:52.065477 containerd[1605]: time="2025-11-04T05:01:52.065456458Z" level=info msg="connecting to shim 12e0197573562a91c98c60eaaa8f4d078a9c3005e6413623a76cd02984122373" address="unix:///run/containerd/s/6cb48fdfdf94417448eaf27797b442d5092ca61a06b6d0a282805725fa384819" protocol=ttrpc version=3 Nov 4 05:01:52.089553 containerd[1605]: time="2025-11-04T05:01:52.089470127Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-nrg8g,Uid:9eabb735-d9b7-4b10-9689-6ea0e4de13ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"da5ad847468124a16bc26d46ee72e6d6935cd5704b067c5eb9e15b400f4e545d\"" Nov 4 05:01:52.091165 kubelet[2784]: E1104 05:01:52.091122 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:52.099037 containerd[1605]: time="2025-11-04T05:01:52.098971452Z" level=info msg="CreateContainer within sandbox \"da5ad847468124a16bc26d46ee72e6d6935cd5704b067c5eb9e15b400f4e545d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 05:01:52.109641 systemd[1]: Started cri-containerd-12e0197573562a91c98c60eaaa8f4d078a9c3005e6413623a76cd02984122373.scope - libcontainer container 12e0197573562a91c98c60eaaa8f4d078a9c3005e6413623a76cd02984122373. Nov 4 05:01:52.110481 containerd[1605]: time="2025-11-04T05:01:52.110335237Z" level=info msg="Container 0f524a4f40513b313e2611bcc4240e979075513c4fc64871cf6248b0abe66c19: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:01:52.116176 containerd[1605]: time="2025-11-04T05:01:52.116135602Z" level=info msg="CreateContainer within sandbox \"da5ad847468124a16bc26d46ee72e6d6935cd5704b067c5eb9e15b400f4e545d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f524a4f40513b313e2611bcc4240e979075513c4fc64871cf6248b0abe66c19\"" Nov 4 05:01:52.116839 containerd[1605]: time="2025-11-04T05:01:52.116807887Z" level=info msg="StartContainer for \"0f524a4f40513b313e2611bcc4240e979075513c4fc64871cf6248b0abe66c19\"" Nov 4 05:01:52.118415 containerd[1605]: time="2025-11-04T05:01:52.118371311Z" level=info msg="connecting to shim 0f524a4f40513b313e2611bcc4240e979075513c4fc64871cf6248b0abe66c19" address="unix:///run/containerd/s/f80a99d576b4ab708ab194ac62e31efaf6737d562c13d73942b63134b49232b2" protocol=ttrpc version=3 Nov 4 05:01:52.146443 
systemd[1]: Started cri-containerd-0f524a4f40513b313e2611bcc4240e979075513c4fc64871cf6248b0abe66c19.scope - libcontainer container 0f524a4f40513b313e2611bcc4240e979075513c4fc64871cf6248b0abe66c19. Nov 4 05:01:52.184924 containerd[1605]: time="2025-11-04T05:01:52.184755534Z" level=info msg="StartContainer for \"12e0197573562a91c98c60eaaa8f4d078a9c3005e6413623a76cd02984122373\" returns successfully" Nov 4 05:01:52.201611 containerd[1605]: time="2025-11-04T05:01:52.201512236Z" level=info msg="StartContainer for \"0f524a4f40513b313e2611bcc4240e979075513c4fc64871cf6248b0abe66c19\" returns successfully" Nov 4 05:01:52.410316 kubelet[2784]: E1104 05:01:52.410157 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:52.416293 kubelet[2784]: E1104 05:01:52.416251 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:52.443664 kubelet[2784]: I1104 05:01:52.443607 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nrg8g" podStartSLOduration=19.443596463 podStartE2EDuration="19.443596463s" podCreationTimestamp="2025-11-04 05:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:01:52.427514206 +0000 UTC m=+25.317849983" watchObservedRunningTime="2025-11-04 05:01:52.443596463 +0000 UTC m=+25.333932220" Nov 4 05:01:52.443813 kubelet[2784]: I1104 05:01:52.443749 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w27g6" podStartSLOduration=19.443744416 podStartE2EDuration="19.443744416s" podCreationTimestamp="2025-11-04 05:01:33 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:01:52.442905818 +0000 UTC m=+25.333241575" watchObservedRunningTime="2025-11-04 05:01:52.443744416 +0000 UTC m=+25.334080173" Nov 4 05:01:52.852766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134361507.mount: Deactivated successfully. Nov 4 05:01:53.421376 kubelet[2784]: E1104 05:01:53.420925 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:53.421376 kubelet[2784]: E1104 05:01:53.421041 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:01:54.422852 kubelet[2784]: E1104 05:01:54.422823 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:02:51.256104 kubelet[2784]: E1104 05:02:51.255321 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:02:54.255635 kubelet[2784]: E1104 05:02:54.255539 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:03:00.254990 kubelet[2784]: E1104 05:03:00.254913 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:03:02.255505 kubelet[2784]: E1104 05:03:02.255413 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:03:03.257685 kubelet[2784]: E1104 05:03:03.257608 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:03:06.254825 kubelet[2784]: E1104 05:03:06.254742 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:03:08.254708 kubelet[2784]: E1104 05:03:08.254672 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:03:18.255333 kubelet[2784]: E1104 05:03:18.255196 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:03:33.593483 systemd[1]: Started sshd@7-172.237.150.130:22-139.178.89.65:59052.service - OpenSSH per-connection server daemon (139.178.89.65:59052). Nov 4 05:03:33.890402 sshd[4125]: Accepted publickey for core from 139.178.89.65 port 59052 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:33.891942 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:33.898110 systemd-logind[1584]: New session 8 of user core. Nov 4 05:03:33.902356 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 05:03:34.121287 sshd[4128]: Connection closed by 139.178.89.65 port 59052 Nov 4 05:03:34.123266 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:34.127863 systemd[1]: sshd@7-172.237.150.130:22-139.178.89.65:59052.service: Deactivated successfully. 
Nov 4 05:03:34.130758 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 05:03:34.132066 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit. Nov 4 05:03:34.133533 systemd-logind[1584]: Removed session 8. Nov 4 05:03:39.184812 systemd[1]: Started sshd@8-172.237.150.130:22-139.178.89.65:47418.service - OpenSSH per-connection server daemon (139.178.89.65:47418). Nov 4 05:03:39.473523 sshd[4144]: Accepted publickey for core from 139.178.89.65 port 47418 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:39.475025 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:39.479843 systemd-logind[1584]: New session 9 of user core. Nov 4 05:03:39.489364 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 05:03:39.695383 sshd[4147]: Connection closed by 139.178.89.65 port 47418 Nov 4 05:03:39.696095 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:39.700565 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. Nov 4 05:03:39.700840 systemd[1]: sshd@8-172.237.150.130:22-139.178.89.65:47418.service: Deactivated successfully. Nov 4 05:03:39.703033 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 05:03:39.704998 systemd-logind[1584]: Removed session 9. Nov 4 05:03:44.761964 systemd[1]: Started sshd@9-172.237.150.130:22-139.178.89.65:47434.service - OpenSSH per-connection server daemon (139.178.89.65:47434). Nov 4 05:03:45.057340 sshd[4160]: Accepted publickey for core from 139.178.89.65 port 47434 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:45.058547 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:45.065120 systemd-logind[1584]: New session 10 of user core. Nov 4 05:03:45.073482 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 4 05:03:45.285263 sshd[4163]: Connection closed by 139.178.89.65 port 47434 Nov 4 05:03:45.285688 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:45.291704 systemd[1]: sshd@9-172.237.150.130:22-139.178.89.65:47434.service: Deactivated successfully. Nov 4 05:03:45.295890 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 05:03:45.298411 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit. Nov 4 05:03:45.300132 systemd-logind[1584]: Removed session 10. Nov 4 05:03:45.345361 systemd[1]: Started sshd@10-172.237.150.130:22-139.178.89.65:47442.service - OpenSSH per-connection server daemon (139.178.89.65:47442). Nov 4 05:03:45.638203 sshd[4175]: Accepted publickey for core from 139.178.89.65 port 47442 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:45.639790 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:45.645300 systemd-logind[1584]: New session 11 of user core. Nov 4 05:03:45.650365 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 05:03:45.905533 sshd[4178]: Connection closed by 139.178.89.65 port 47442 Nov 4 05:03:45.906504 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:45.911102 systemd[1]: sshd@10-172.237.150.130:22-139.178.89.65:47442.service: Deactivated successfully. Nov 4 05:03:45.913675 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 05:03:45.914954 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit. Nov 4 05:03:45.917023 systemd-logind[1584]: Removed session 11. Nov 4 05:03:45.971063 systemd[1]: Started sshd@11-172.237.150.130:22-139.178.89.65:47458.service - OpenSSH per-connection server daemon (139.178.89.65:47458). 
Nov 4 05:03:46.267045 sshd[4187]: Accepted publickey for core from 139.178.89.65 port 47458 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:46.268998 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:46.275982 systemd-logind[1584]: New session 12 of user core. Nov 4 05:03:46.283842 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 05:03:46.483990 sshd[4190]: Connection closed by 139.178.89.65 port 47458 Nov 4 05:03:46.484827 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:46.490448 systemd[1]: sshd@11-172.237.150.130:22-139.178.89.65:47458.service: Deactivated successfully. Nov 4 05:03:46.493940 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 05:03:46.496629 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. Nov 4 05:03:46.498771 systemd-logind[1584]: Removed session 12. Nov 4 05:03:51.562749 systemd[1]: Started sshd@12-172.237.150.130:22-139.178.89.65:40592.service - OpenSSH per-connection server daemon (139.178.89.65:40592). Nov 4 05:03:51.883016 sshd[4202]: Accepted publickey for core from 139.178.89.65 port 40592 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:51.883971 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:51.889665 systemd-logind[1584]: New session 13 of user core. Nov 4 05:03:51.894359 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 05:03:52.101471 sshd[4205]: Connection closed by 139.178.89.65 port 40592 Nov 4 05:03:52.102326 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:52.106536 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. Nov 4 05:03:52.106779 systemd[1]: sshd@12-172.237.150.130:22-139.178.89.65:40592.service: Deactivated successfully. 
Nov 4 05:03:52.113992 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 05:03:52.116302 systemd-logind[1584]: Removed session 13. Nov 4 05:03:57.175452 systemd[1]: Started sshd@13-172.237.150.130:22-139.178.89.65:47272.service - OpenSSH per-connection server daemon (139.178.89.65:47272). Nov 4 05:03:57.484516 sshd[4217]: Accepted publickey for core from 139.178.89.65 port 47272 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:57.486267 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:57.491910 systemd-logind[1584]: New session 14 of user core. Nov 4 05:03:57.501470 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 05:03:57.708984 sshd[4220]: Connection closed by 139.178.89.65 port 47272 Nov 4 05:03:57.707850 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:57.715850 systemd[1]: sshd@13-172.237.150.130:22-139.178.89.65:47272.service: Deactivated successfully. Nov 4 05:03:57.718895 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 05:03:57.720643 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit. Nov 4 05:03:57.722715 systemd-logind[1584]: Removed session 14. Nov 4 05:03:57.771554 systemd[1]: Started sshd@14-172.237.150.130:22-139.178.89.65:47280.service - OpenSSH per-connection server daemon (139.178.89.65:47280). Nov 4 05:03:58.081958 sshd[4232]: Accepted publickey for core from 139.178.89.65 port 47280 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:58.083748 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:58.090338 systemd-logind[1584]: New session 15 of user core. Nov 4 05:03:58.097424 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 4 05:03:58.338699 sshd[4235]: Connection closed by 139.178.89.65 port 47280 Nov 4 05:03:58.339993 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:58.346797 systemd[1]: sshd@14-172.237.150.130:22-139.178.89.65:47280.service: Deactivated successfully. Nov 4 05:03:58.350250 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 05:03:58.351513 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit. Nov 4 05:03:58.353889 systemd-logind[1584]: Removed session 15. Nov 4 05:03:58.402713 systemd[1]: Started sshd@15-172.237.150.130:22-139.178.89.65:47284.service - OpenSSH per-connection server daemon (139.178.89.65:47284). Nov 4 05:03:58.718657 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 47284 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:58.720890 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:58.727321 systemd-logind[1584]: New session 16 of user core. Nov 4 05:03:58.731384 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 05:03:59.432644 sshd[4248]: Connection closed by 139.178.89.65 port 47284 Nov 4 05:03:59.433335 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Nov 4 05:03:59.437910 systemd[1]: sshd@15-172.237.150.130:22-139.178.89.65:47284.service: Deactivated successfully. Nov 4 05:03:59.440928 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 05:03:59.443648 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit. Nov 4 05:03:59.444634 systemd-logind[1584]: Removed session 16. Nov 4 05:03:59.497485 systemd[1]: Started sshd@16-172.237.150.130:22-139.178.89.65:47296.service - OpenSSH per-connection server daemon (139.178.89.65:47296). 
Nov 4 05:03:59.788348 sshd[4263]: Accepted publickey for core from 139.178.89.65 port 47296 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:03:59.790503 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:03:59.797965 systemd-logind[1584]: New session 17 of user core. Nov 4 05:03:59.805563 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 05:04:00.096835 sshd[4266]: Connection closed by 139.178.89.65 port 47296 Nov 4 05:04:00.097576 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:00.101786 systemd[1]: sshd@16-172.237.150.130:22-139.178.89.65:47296.service: Deactivated successfully. Nov 4 05:04:00.102047 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit. Nov 4 05:04:00.104139 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 05:04:00.106211 systemd-logind[1584]: Removed session 17. Nov 4 05:04:00.158100 systemd[1]: Started sshd@17-172.237.150.130:22-139.178.89.65:47308.service - OpenSSH per-connection server daemon (139.178.89.65:47308). Nov 4 05:04:00.446513 sshd[4276]: Accepted publickey for core from 139.178.89.65 port 47308 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:04:00.448418 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:00.456016 systemd-logind[1584]: New session 18 of user core. Nov 4 05:04:00.460462 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 05:04:00.665443 sshd[4281]: Connection closed by 139.178.89.65 port 47308 Nov 4 05:04:00.666107 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:00.671955 systemd[1]: sshd@17-172.237.150.130:22-139.178.89.65:47308.service: Deactivated successfully. Nov 4 05:04:00.675277 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 05:04:00.677116 systemd-logind[1584]: Session 18 logged out. 
Waiting for processes to exit. Nov 4 05:04:00.678976 systemd-logind[1584]: Removed session 18. Nov 4 05:04:02.255499 kubelet[2784]: E1104 05:04:02.255453 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:04:05.734633 systemd[1]: Started sshd@18-172.237.150.130:22-139.178.89.65:47320.service - OpenSSH per-connection server daemon (139.178.89.65:47320). Nov 4 05:04:06.032204 sshd[4297]: Accepted publickey for core from 139.178.89.65 port 47320 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:04:06.033753 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:06.039257 systemd-logind[1584]: New session 19 of user core. Nov 4 05:04:06.043381 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 05:04:06.274699 sshd[4300]: Connection closed by 139.178.89.65 port 47320 Nov 4 05:04:06.275346 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:06.279910 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit. Nov 4 05:04:06.280604 systemd[1]: sshd@18-172.237.150.130:22-139.178.89.65:47320.service: Deactivated successfully. Nov 4 05:04:06.283076 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 05:04:06.285474 systemd-logind[1584]: Removed session 19. 
Nov 4 05:04:07.255183 kubelet[2784]: E1104 05:04:07.254877 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:04:11.255636 kubelet[2784]: E1104 05:04:11.255259 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:04:11.341308 systemd[1]: Started sshd@19-172.237.150.130:22-139.178.89.65:35284.service - OpenSSH per-connection server daemon (139.178.89.65:35284). Nov 4 05:04:11.647403 sshd[4312]: Accepted publickey for core from 139.178.89.65 port 35284 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:04:11.649433 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:11.655291 systemd-logind[1584]: New session 20 of user core. Nov 4 05:04:11.660611 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 05:04:11.879631 sshd[4315]: Connection closed by 139.178.89.65 port 35284 Nov 4 05:04:11.880425 sshd-session[4312]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:11.885679 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit. Nov 4 05:04:11.886039 systemd[1]: sshd@19-172.237.150.130:22-139.178.89.65:35284.service: Deactivated successfully. Nov 4 05:04:11.891631 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 05:04:11.893624 systemd-logind[1584]: Removed session 20. Nov 4 05:04:11.957202 systemd[1]: Started sshd@20-172.237.150.130:22-139.178.89.65:35288.service - OpenSSH per-connection server daemon (139.178.89.65:35288). 
Nov 4 05:04:12.255121 kubelet[2784]: E1104 05:04:12.254940 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 4 05:04:12.274790 sshd[4327]: Accepted publickey for core from 139.178.89.65 port 35288 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:04:12.276882 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:12.283178 systemd-logind[1584]: New session 21 of user core. Nov 4 05:04:12.290391 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 05:04:13.689748 containerd[1605]: time="2025-11-04T05:04:13.689686271Z" level=info msg="StopContainer for \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" with timeout 30 (s)" Nov 4 05:04:13.692624 containerd[1605]: time="2025-11-04T05:04:13.692410905Z" level=info msg="Stop container \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" with signal terminated" Nov 4 05:04:13.708526 containerd[1605]: time="2025-11-04T05:04:13.708478452Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 05:04:13.724729 containerd[1605]: time="2025-11-04T05:04:13.724687398Z" level=info msg="StopContainer for \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" with timeout 2 (s)" Nov 4 05:04:13.726273 containerd[1605]: time="2025-11-04T05:04:13.725455550Z" level=info msg="Stop container \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" with signal terminated" Nov 4 05:04:13.738708 systemd-networkd[1518]: lxc_health: Link DOWN Nov 4 05:04:13.738741 systemd-networkd[1518]: lxc_health: Lost carrier Nov 4 05:04:13.746394 systemd[1]: 
cri-containerd-b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3.scope: Deactivated successfully. Nov 4 05:04:13.751634 containerd[1605]: time="2025-11-04T05:04:13.751305692Z" level=info msg="received exit event container_id:\"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" id:\"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" pid:3317 exited_at:{seconds:1762232653 nanos:750769321}" Nov 4 05:04:13.772714 systemd[1]: cri-containerd-883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3.scope: Deactivated successfully. Nov 4 05:04:13.773358 systemd[1]: cri-containerd-883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3.scope: Consumed 6.400s CPU time, 123.2M memory peak, 136K read from disk, 13.3M written to disk. Nov 4 05:04:13.775708 containerd[1605]: time="2025-11-04T05:04:13.775626702Z" level=info msg="received exit event container_id:\"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" id:\"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" pid:3429 exited_at:{seconds:1762232653 nanos:775126691}" Nov 4 05:04:13.791789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3-rootfs.mount: Deactivated successfully. 
Nov 4 05:04:13.805517 containerd[1605]: time="2025-11-04T05:04:13.805472641Z" level=info msg="StopContainer for \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" returns successfully" Nov 4 05:04:13.808905 containerd[1605]: time="2025-11-04T05:04:13.808568576Z" level=info msg="StopPodSandbox for \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\"" Nov 4 05:04:13.808905 containerd[1605]: time="2025-11-04T05:04:13.808623206Z" level=info msg="Container to stop \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 05:04:13.814796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3-rootfs.mount: Deactivated successfully. Nov 4 05:04:13.821762 containerd[1605]: time="2025-11-04T05:04:13.821713608Z" level=info msg="StopContainer for \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" returns successfully" Nov 4 05:04:13.824274 containerd[1605]: time="2025-11-04T05:04:13.824196882Z" level=info msg="StopPodSandbox for \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\"" Nov 4 05:04:13.824668 containerd[1605]: time="2025-11-04T05:04:13.824528182Z" level=info msg="Container to stop \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 05:04:13.824729 containerd[1605]: time="2025-11-04T05:04:13.824667282Z" level=info msg="Container to stop \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 05:04:13.824729 containerd[1605]: time="2025-11-04T05:04:13.824682302Z" level=info msg="Container to stop \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 
05:04:13.824729 containerd[1605]: time="2025-11-04T05:04:13.824692882Z" level=info msg="Container to stop \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 05:04:13.824729 containerd[1605]: time="2025-11-04T05:04:13.824701342Z" level=info msg="Container to stop \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 05:04:13.836617 systemd[1]: cri-containerd-864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62.scope: Deactivated successfully. Nov 4 05:04:13.841063 systemd[1]: cri-containerd-30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab.scope: Deactivated successfully. Nov 4 05:04:13.888028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab-rootfs.mount: Deactivated successfully. Nov 4 05:04:13.892369 containerd[1605]: time="2025-11-04T05:04:13.892341703Z" level=info msg="shim disconnected" id=30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab namespace=k8s.io Nov 4 05:04:13.892593 containerd[1605]: time="2025-11-04T05:04:13.892525414Z" level=info msg="cleaning up after shim disconnected" id=30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab namespace=k8s.io Nov 4 05:04:13.892593 containerd[1605]: time="2025-11-04T05:04:13.892542534Z" level=info msg="cleaning up dead shim" id=30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab namespace=k8s.io Nov 4 05:04:13.899736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62-rootfs.mount: Deactivated successfully. 
Nov 4 05:04:13.902580 containerd[1605]: time="2025-11-04T05:04:13.902511540Z" level=info msg="shim disconnected" id=864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62 namespace=k8s.io Nov 4 05:04:13.902580 containerd[1605]: time="2025-11-04T05:04:13.902558160Z" level=info msg="cleaning up after shim disconnected" id=864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62 namespace=k8s.io Nov 4 05:04:13.902680 containerd[1605]: time="2025-11-04T05:04:13.902571500Z" level=info msg="cleaning up dead shim" id=864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62 namespace=k8s.io Nov 4 05:04:13.921258 containerd[1605]: time="2025-11-04T05:04:13.919631608Z" level=info msg="received exit event sandbox_id:\"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" exit_status:137 exited_at:{seconds:1762232653 nanos:849710333}" Nov 4 05:04:13.922530 containerd[1605]: time="2025-11-04T05:04:13.920604230Z" level=info msg="TearDown network for sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" successfully" Nov 4 05:04:13.922578 containerd[1605]: time="2025-11-04T05:04:13.922541853Z" level=info msg="StopPodSandbox for \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" returns successfully" Nov 4 05:04:13.923041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab-shm.mount: Deactivated successfully. 
Nov 4 05:04:13.926428 containerd[1605]: time="2025-11-04T05:04:13.926374919Z" level=error msg="Failed to handle event container_id:\"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" id:\"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" pid:3026 exit_status:137 exited_at:{seconds:1762232653 nanos:839533347} for 864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Nov 4 05:04:13.926671 containerd[1605]: time="2025-11-04T05:04:13.926475349Z" level=info msg="received exit event sandbox_id:\"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" exit_status:137 exited_at:{seconds:1762232653 nanos:839533347}" Nov 4 05:04:13.927641 containerd[1605]: time="2025-11-04T05:04:13.927605481Z" level=info msg="TearDown network for sandbox \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" successfully" Nov 4 05:04:13.927823 containerd[1605]: time="2025-11-04T05:04:13.927650121Z" level=info msg="StopPodSandbox for \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" returns successfully" Nov 4 05:04:13.992122 kubelet[2784]: I1104 05:04:13.992059 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.992122 kubelet[2784]: I1104 05:04:13.992107 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-run\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992628 kubelet[2784]: I1104 05:04:13.992136 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-xtables-lock\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992628 kubelet[2784]: I1104 05:04:13.992154 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-etc-cni-netd\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992628 kubelet[2784]: I1104 05:04:13.992178 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkjxh\" (UniqueName: \"kubernetes.io/projected/555c0676-a74e-4bdd-92e4-6446cd997796-kube-api-access-rkjxh\") pod \"555c0676-a74e-4bdd-92e4-6446cd997796\" (UID: \"555c0676-a74e-4bdd-92e4-6446cd997796\") " Nov 4 05:04:13.992628 kubelet[2784]: I1104 05:04:13.992196 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-cgroup\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992628 kubelet[2784]: I1104 05:04:13.992215 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-clustermesh-secrets\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992628 kubelet[2784]: I1104 05:04:13.992273 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-host-proc-sys-kernel\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992869 kubelet[2784]: I1104 05:04:13.992298 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9rl7\" (UniqueName: \"kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-kube-api-access-g9rl7\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992869 kubelet[2784]: I1104 05:04:13.992318 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-hubble-tls\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992869 kubelet[2784]: I1104 05:04:13.992357 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-hostproc\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992869 kubelet[2784]: I1104 05:04:13.992377 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-config-path\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992869 kubelet[2784]: I1104 
05:04:13.992393 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cni-path\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.992869 kubelet[2784]: I1104 05:04:13.992412 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/555c0676-a74e-4bdd-92e4-6446cd997796-cilium-config-path\") pod \"555c0676-a74e-4bdd-92e4-6446cd997796\" (UID: \"555c0676-a74e-4bdd-92e4-6446cd997796\") " Nov 4 05:04:13.993163 kubelet[2784]: I1104 05:04:13.992432 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-bpf-maps\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.993163 kubelet[2784]: I1104 05:04:13.992450 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-lib-modules\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.993163 kubelet[2784]: I1104 05:04:13.992466 2784 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-host-proc-sys-net\") pod \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\" (UID: \"eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3\") " Nov 4 05:04:13.993163 kubelet[2784]: I1104 05:04:13.992494 2784 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-run\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:13.993163 kubelet[2784]: 
I1104 05:04:13.992513 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.993163 kubelet[2784]: I1104 05:04:13.992530 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.993498 kubelet[2784]: I1104 05:04:13.992543 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.994339 kubelet[2784]: I1104 05:04:13.994295 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-hostproc" (OuterVolumeSpecName: "hostproc") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.994339 kubelet[2784]: I1104 05:04:13.994327 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.995453 kubelet[2784]: I1104 05:04:13.995402 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cni-path" (OuterVolumeSpecName: "cni-path") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.995925 kubelet[2784]: I1104 05:04:13.995507 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.995925 kubelet[2784]: I1104 05:04:13.995551 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.995925 kubelet[2784]: I1104 05:04:13.995576 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:04:13.995925 kubelet[2784]: I1104 05:04:13.995656 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555c0676-a74e-4bdd-92e4-6446cd997796-kube-api-access-rkjxh" (OuterVolumeSpecName: "kube-api-access-rkjxh") pod "555c0676-a74e-4bdd-92e4-6446cd997796" (UID: "555c0676-a74e-4bdd-92e4-6446cd997796"). InnerVolumeSpecName "kube-api-access-rkjxh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 05:04:13.998411 kubelet[2784]: I1104 05:04:13.998354 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-kube-api-access-g9rl7" (OuterVolumeSpecName: "kube-api-access-g9rl7") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "kube-api-access-g9rl7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 05:04:14.002829 kubelet[2784]: I1104 05:04:14.002795 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/555c0676-a74e-4bdd-92e4-6446cd997796-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "555c0676-a74e-4bdd-92e4-6446cd997796" (UID: "555c0676-a74e-4bdd-92e4-6446cd997796"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 05:04:14.004104 kubelet[2784]: I1104 05:04:14.004072 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 05:04:14.004478 kubelet[2784]: I1104 05:04:14.004418 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 05:04:14.004970 kubelet[2784]: I1104 05:04:14.004942 2784 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" (UID: "eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 05:04:14.093435 kubelet[2784]: I1104 05:04:14.093349 2784 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g9rl7\" (UniqueName: \"kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-kube-api-access-g9rl7\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093435 kubelet[2784]: I1104 05:04:14.093411 2784 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-hubble-tls\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093435 kubelet[2784]: I1104 05:04:14.093426 2784 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-hostproc\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093435 kubelet[2784]: I1104 05:04:14.093437 2784 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-config-path\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093435 kubelet[2784]: I1104 05:04:14.093452 2784 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cni-path\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093435 kubelet[2784]: I1104 05:04:14.093464 2784 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/555c0676-a74e-4bdd-92e4-6446cd997796-cilium-config-path\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093812 kubelet[2784]: I1104 05:04:14.093475 2784 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-bpf-maps\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093812 
kubelet[2784]: I1104 05:04:14.093484 2784 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-lib-modules\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093812 kubelet[2784]: I1104 05:04:14.093493 2784 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-host-proc-sys-net\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093812 kubelet[2784]: I1104 05:04:14.093502 2784 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-xtables-lock\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093812 kubelet[2784]: I1104 05:04:14.093513 2784 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-etc-cni-netd\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093812 kubelet[2784]: I1104 05:04:14.093524 2784 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rkjxh\" (UniqueName: \"kubernetes.io/projected/555c0676-a74e-4bdd-92e4-6446cd997796-kube-api-access-rkjxh\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093812 kubelet[2784]: I1104 05:04:14.093534 2784 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-cilium-cgroup\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.093812 kubelet[2784]: I1104 05:04:14.093545 2784 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-clustermesh-secrets\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.094107 kubelet[2784]: I1104 05:04:14.093560 2784 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3-host-proc-sys-kernel\") on node \"172-237-150-130\" DevicePath \"\"" Nov 4 05:04:14.721648 kubelet[2784]: I1104 05:04:14.721336 2784 scope.go:117] "RemoveContainer" containerID="b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3" Nov 4 05:04:14.726893 containerd[1605]: time="2025-11-04T05:04:14.726853083Z" level=info msg="RemoveContainer for \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\"" Nov 4 05:04:14.729448 systemd[1]: Removed slice kubepods-besteffort-pod555c0676_a74e_4bdd_92e4_6446cd997796.slice - libcontainer container kubepods-besteffort-pod555c0676_a74e_4bdd_92e4_6446cd997796.slice. Nov 4 05:04:14.735365 containerd[1605]: time="2025-11-04T05:04:14.735219467Z" level=info msg="RemoveContainer for \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" returns successfully" Nov 4 05:04:14.737472 kubelet[2784]: I1104 05:04:14.736630 2784 scope.go:117] "RemoveContainer" containerID="b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3" Nov 4 05:04:14.737137 systemd[1]: Removed slice kubepods-burstable-podeb6a131f_8eeb_4fd4_9ac6_2c79a8b74fd3.slice - libcontainer container kubepods-burstable-podeb6a131f_8eeb_4fd4_9ac6_2c79a8b74fd3.slice. Nov 4 05:04:14.737291 systemd[1]: kubepods-burstable-podeb6a131f_8eeb_4fd4_9ac6_2c79a8b74fd3.slice: Consumed 6.529s CPU time, 123.7M memory peak, 136K read from disk, 13.3M written to disk. 
Nov 4 05:04:14.738488 containerd[1605]: time="2025-11-04T05:04:14.738452173Z" level=error msg="ContainerStatus for \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\": not found" Nov 4 05:04:14.739029 kubelet[2784]: E1104 05:04:14.738595 2784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\": not found" containerID="b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3" Nov 4 05:04:14.739029 kubelet[2784]: I1104 05:04:14.738642 2784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3"} err="failed to get container status \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9fdda31c68c48a837bfbe53b0e378e189a799793bb105bac9a9a962e103b1c3\": not found" Nov 4 05:04:14.739029 kubelet[2784]: I1104 05:04:14.738692 2784 scope.go:117] "RemoveContainer" containerID="883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3" Nov 4 05:04:14.754212 containerd[1605]: time="2025-11-04T05:04:14.750382243Z" level=info msg="RemoveContainer for \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\"" Nov 4 05:04:14.756624 containerd[1605]: time="2025-11-04T05:04:14.756354333Z" level=info msg="RemoveContainer for \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" returns successfully" Nov 4 05:04:14.756810 kubelet[2784]: I1104 05:04:14.756785 2784 scope.go:117] "RemoveContainer" containerID="c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd" Nov 4 05:04:14.758377 
containerd[1605]: time="2025-11-04T05:04:14.758332607Z" level=info msg="RemoveContainer for \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\"" Nov 4 05:04:14.762429 containerd[1605]: time="2025-11-04T05:04:14.762351753Z" level=info msg="RemoveContainer for \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\" returns successfully" Nov 4 05:04:14.762656 kubelet[2784]: I1104 05:04:14.762633 2784 scope.go:117] "RemoveContainer" containerID="4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1" Nov 4 05:04:14.764504 containerd[1605]: time="2025-11-04T05:04:14.764482337Z" level=info msg="RemoveContainer for \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\"" Nov 4 05:04:14.774486 containerd[1605]: time="2025-11-04T05:04:14.774447683Z" level=info msg="RemoveContainer for \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\" returns successfully" Nov 4 05:04:14.775558 kubelet[2784]: I1104 05:04:14.775530 2784 scope.go:117] "RemoveContainer" containerID="b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c" Nov 4 05:04:14.780453 containerd[1605]: time="2025-11-04T05:04:14.780408074Z" level=info msg="RemoveContainer for \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\"" Nov 4 05:04:14.785480 containerd[1605]: time="2025-11-04T05:04:14.785436032Z" level=info msg="RemoveContainer for \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\" returns successfully" Nov 4 05:04:14.785744 kubelet[2784]: I1104 05:04:14.785715 2784 scope.go:117] "RemoveContainer" containerID="2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578" Nov 4 05:04:14.790201 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62-shm.mount: Deactivated successfully. 
Nov 4 05:04:14.790353 systemd[1]: var-lib-kubelet-pods-555c0676\x2da74e\x2d4bdd\x2d92e4\x2d6446cd997796-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drkjxh.mount: Deactivated successfully. Nov 4 05:04:14.790431 systemd[1]: var-lib-kubelet-pods-eb6a131f\x2d8eeb\x2d4fd4\x2d9ac6\x2d2c79a8b74fd3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg9rl7.mount: Deactivated successfully. Nov 4 05:04:14.790513 systemd[1]: var-lib-kubelet-pods-eb6a131f\x2d8eeb\x2d4fd4\x2d9ac6\x2d2c79a8b74fd3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 4 05:04:14.790592 systemd[1]: var-lib-kubelet-pods-eb6a131f\x2d8eeb\x2d4fd4\x2d9ac6\x2d2c79a8b74fd3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 4 05:04:14.795266 containerd[1605]: time="2025-11-04T05:04:14.794946058Z" level=info msg="RemoveContainer for \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\"" Nov 4 05:04:14.799655 containerd[1605]: time="2025-11-04T05:04:14.799623316Z" level=info msg="RemoveContainer for \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\" returns successfully" Nov 4 05:04:14.800282 containerd[1605]: time="2025-11-04T05:04:14.799934556Z" level=error msg="ContainerStatus for \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\": not found" Nov 4 05:04:14.800376 kubelet[2784]: I1104 05:04:14.799805 2784 scope.go:117] "RemoveContainer" containerID="883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3" Nov 4 05:04:14.800424 kubelet[2784]: E1104 05:04:14.800044 2784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\": not found" 
containerID="883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3" Nov 4 05:04:14.800424 kubelet[2784]: I1104 05:04:14.800405 2784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3"} err="failed to get container status \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"883c20c5bccd3410d0623131382683d519cc38efe02470041a547251448062b3\": not found" Nov 4 05:04:14.800424 kubelet[2784]: I1104 05:04:14.800421 2784 scope.go:117] "RemoveContainer" containerID="c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd" Nov 4 05:04:14.800972 containerd[1605]: time="2025-11-04T05:04:14.800702628Z" level=error msg="ContainerStatus for \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\": not found" Nov 4 05:04:14.801055 kubelet[2784]: E1104 05:04:14.801001 2784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\": not found" containerID="c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd" Nov 4 05:04:14.801055 kubelet[2784]: I1104 05:04:14.801018 2784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd"} err="failed to get container status \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"c61f11af4b0b45c39ebf64b8a253b80e07c87c172b79c293765b003a9bb85bdd\": not found" Nov 4 05:04:14.801055 
kubelet[2784]: I1104 05:04:14.801030 2784 scope.go:117] "RemoveContainer" containerID="4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1" Nov 4 05:04:14.801678 containerd[1605]: time="2025-11-04T05:04:14.801638709Z" level=error msg="ContainerStatus for \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\": not found" Nov 4 05:04:14.802037 kubelet[2784]: E1104 05:04:14.801827 2784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\": not found" containerID="4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1" Nov 4 05:04:14.802037 kubelet[2784]: I1104 05:04:14.801844 2784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1"} err="failed to get container status \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ee993ad71746a304c9deb67ad5bc6afc85edf2d44dd6317d7c9da25279f99c1\": not found" Nov 4 05:04:14.802037 kubelet[2784]: I1104 05:04:14.801855 2784 scope.go:117] "RemoveContainer" containerID="b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c" Nov 4 05:04:14.802554 containerd[1605]: time="2025-11-04T05:04:14.802475971Z" level=error msg="ContainerStatus for \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\": not found" Nov 4 05:04:14.802661 kubelet[2784]: E1104 05:04:14.802599 2784 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\": not found" containerID="b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c" Nov 4 05:04:14.802661 kubelet[2784]: I1104 05:04:14.802643 2784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c"} err="failed to get container status \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3779306c3f9dbc7a9f9092829c1b3e9215c9bb260a9fb31786aa63eaedc631c\": not found" Nov 4 05:04:14.802661 kubelet[2784]: I1104 05:04:14.802657 2784 scope.go:117] "RemoveContainer" containerID="2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578" Nov 4 05:04:14.802978 containerd[1605]: time="2025-11-04T05:04:14.802933761Z" level=error msg="ContainerStatus for \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\": not found" Nov 4 05:04:14.803438 kubelet[2784]: E1104 05:04:14.803361 2784 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\": not found" containerID="2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578" Nov 4 05:04:14.803438 kubelet[2784]: I1104 05:04:14.803389 2784 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578"} err="failed to get container status 
\"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\": rpc error: code = NotFound desc = an error occurred when try to find container \"2669eb3de51afd4a7752c9fad0e497123ee0e1d4096dbe15337086563217b578\": not found" Nov 4 05:04:15.260138 kubelet[2784]: I1104 05:04:15.260070 2784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555c0676-a74e-4bdd-92e4-6446cd997796" path="/var/lib/kubelet/pods/555c0676-a74e-4bdd-92e4-6446cd997796/volumes" Nov 4 05:04:15.261147 kubelet[2784]: I1104 05:04:15.261130 2784 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3" path="/var/lib/kubelet/pods/eb6a131f-8eeb-4fd4-9ac6-2c79a8b74fd3/volumes" Nov 4 05:04:15.682444 sshd[4330]: Connection closed by 139.178.89.65 port 35288 Nov 4 05:04:15.683355 sshd-session[4327]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:15.689425 systemd[1]: sshd@20-172.237.150.130:22-139.178.89.65:35288.service: Deactivated successfully. Nov 4 05:04:15.691978 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 05:04:15.693132 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit. Nov 4 05:04:15.694779 systemd-logind[1584]: Removed session 21. Nov 4 05:04:15.747140 systemd[1]: Started sshd@21-172.237.150.130:22-139.178.89.65:35294.service - OpenSSH per-connection server daemon (139.178.89.65:35294). Nov 4 05:04:16.047619 sshd[4482]: Accepted publickey for core from 139.178.89.65 port 35294 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:04:16.049810 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:16.056721 systemd-logind[1584]: New session 22 of user core. Nov 4 05:04:16.063461 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 4 05:04:16.628001 systemd[1]: Created slice kubepods-burstable-pod94fcf684_0a62_4c39_8375_7014918c21f9.slice - libcontainer container kubepods-burstable-pod94fcf684_0a62_4c39_8375_7014918c21f9.slice. Nov 4 05:04:16.629403 kubelet[2784]: E1104 05:04:16.628295 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Nov 4 05:04:16.629403 kubelet[2784]: E1104 05:04:16.628350 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Nov 4 05:04:16.629403 kubelet[2784]: E1104 05:04:16.628389 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Nov 4 05:04:16.629403 kubelet[2784]: E1104 05:04:16.628421 2784 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:172-237-150-130\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" 
logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-ipsec-keys\"" type="*v1.Secret" Nov 4 05:04:16.632216 kubelet[2784]: E1104 05:04:16.627303 2784 status_manager.go:1018] "Failed to get status for pod" err="pods \"cilium-t6hd9\" is forbidden: User \"system:node:172-237-150-130\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-150-130' and this object" podUID="94fcf684-0a62-4c39-8375-7014918c21f9" pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.656281 sshd[4485]: Connection closed by 139.178.89.65 port 35294 Nov 4 05:04:16.657124 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:16.665859 systemd[1]: sshd@21-172.237.150.130:22-139.178.89.65:35294.service: Deactivated successfully. Nov 4 05:04:16.669488 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 05:04:16.672765 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit. Nov 4 05:04:16.676047 systemd-logind[1584]: Removed session 22. 
Nov 4 05:04:16.711535 kubelet[2784]: I1104 05:04:16.711466 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-cilium-run\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711535 kubelet[2784]: I1104 05:04:16.711531 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94fcf684-0a62-4c39-8375-7014918c21f9-cilium-config-path\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711648 kubelet[2784]: I1104 05:04:16.711562 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-host-proc-sys-kernel\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711648 kubelet[2784]: I1104 05:04:16.711591 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-xtables-lock\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711648 kubelet[2784]: I1104 05:04:16.711617 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58mqp\" (UniqueName: \"kubernetes.io/projected/94fcf684-0a62-4c39-8375-7014918c21f9-kube-api-access-58mqp\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711648 kubelet[2784]: I1104 05:04:16.711645 2784 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-cilium-cgroup\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711743 kubelet[2784]: I1104 05:04:16.711667 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94fcf684-0a62-4c39-8375-7014918c21f9-hubble-tls\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711743 kubelet[2784]: I1104 05:04:16.711692 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94fcf684-0a62-4c39-8375-7014918c21f9-clustermesh-secrets\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711743 kubelet[2784]: I1104 05:04:16.711714 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-bpf-maps\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711743 kubelet[2784]: I1104 05:04:16.711737 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-cni-path\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711830 kubelet[2784]: I1104 05:04:16.711761 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/94fcf684-0a62-4c39-8375-7014918c21f9-cilium-ipsec-secrets\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711830 kubelet[2784]: I1104 05:04:16.711785 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-hostproc\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711830 kubelet[2784]: I1104 05:04:16.711808 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-host-proc-sys-net\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711894 kubelet[2784]: I1104 05:04:16.711836 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-etc-cni-netd\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.711894 kubelet[2784]: I1104 05:04:16.711861 2784 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94fcf684-0a62-4c39-8375-7014918c21f9-lib-modules\") pod \"cilium-t6hd9\" (UID: \"94fcf684-0a62-4c39-8375-7014918c21f9\") " pod="kube-system/cilium-t6hd9" Nov 4 05:04:16.725191 systemd[1]: Started sshd@22-172.237.150.130:22-139.178.89.65:51812.service - OpenSSH per-connection server daemon (139.178.89.65:51812). 
Nov 4 05:04:17.025894 sshd[4496]: Accepted publickey for core from 139.178.89.65 port 51812 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:04:17.027670 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:04:17.035079 systemd-logind[1584]: New session 23 of user core.
Nov 4 05:04:17.041402 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 4 05:04:17.179355 sshd[4500]: Connection closed by 139.178.89.65 port 51812
Nov 4 05:04:17.180583 sshd-session[4496]: pam_unix(sshd:session): session closed for user core
Nov 4 05:04:17.186594 systemd[1]: sshd@22-172.237.150.130:22-139.178.89.65:51812.service: Deactivated successfully.
Nov 4 05:04:17.189800 systemd[1]: session-23.scope: Deactivated successfully.
Nov 4 05:04:17.191490 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit.
Nov 4 05:04:17.193088 systemd-logind[1584]: Removed session 23.
Nov 4 05:04:17.243477 systemd[1]: Started sshd@23-172.237.150.130:22-139.178.89.65:51820.service - OpenSSH per-connection server daemon (139.178.89.65:51820).
Nov 4 05:04:17.369925 kubelet[2784]: E1104 05:04:17.369785 2784 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 4 05:04:17.551598 sshd[4507]: Accepted publickey for core from 139.178.89.65 port 51820 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:04:17.553193 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:04:17.558770 systemd-logind[1584]: New session 24 of user core.
Nov 4 05:04:17.564382 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 4 05:04:17.814288 kubelet[2784]: E1104 05:04:17.814206 2784 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Nov 4 05:04:17.814834 kubelet[2784]: E1104 05:04:17.814367 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94fcf684-0a62-4c39-8375-7014918c21f9-clustermesh-secrets podName:94fcf684-0a62-4c39-8375-7014918c21f9 nodeName:}" failed. No retries permitted until 2025-11-04 05:04:18.31434239 +0000 UTC m=+171.204678147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/94fcf684-0a62-4c39-8375-7014918c21f9-clustermesh-secrets") pod "cilium-t6hd9" (UID: "94fcf684-0a62-4c39-8375-7014918c21f9") : failed to sync secret cache: timed out waiting for the condition
Nov 4 05:04:17.814834 kubelet[2784]: E1104 05:04:17.814206 2784 projected.go:266] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Nov 4 05:04:17.814834 kubelet[2784]: E1104 05:04:17.814705 2784 projected.go:196] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-t6hd9: failed to sync secret cache: timed out waiting for the condition
Nov 4 05:04:17.814834 kubelet[2784]: E1104 05:04:17.814757 2784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94fcf684-0a62-4c39-8375-7014918c21f9-hubble-tls podName:94fcf684-0a62-4c39-8375-7014918c21f9 nodeName:}" failed. No retries permitted until 2025-11-04 05:04:18.314742211 +0000 UTC m=+171.205077968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/94fcf684-0a62-4c39-8375-7014918c21f9-hubble-tls") pod "cilium-t6hd9" (UID: "94fcf684-0a62-4c39-8375-7014918c21f9") : failed to sync secret cache: timed out waiting for the condition
Nov 4 05:04:18.437852 kubelet[2784]: E1104 05:04:18.437786 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:18.438388 containerd[1605]: time="2025-11-04T05:04:18.438345756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6hd9,Uid:94fcf684-0a62-4c39-8375-7014918c21f9,Namespace:kube-system,Attempt:0,}"
Nov 4 05:04:18.456392 containerd[1605]: time="2025-11-04T05:04:18.456300119Z" level=info msg="connecting to shim a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c" address="unix:///run/containerd/s/668b80e6efea9839ec8acb71f7a22e0f3a60cb79b5091fe977bcbc52d85afc06" namespace=k8s.io protocol=ttrpc version=3
Nov 4 05:04:18.493365 systemd[1]: Started cri-containerd-a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c.scope - libcontainer container a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c.
Nov 4 05:04:18.529795 containerd[1605]: time="2025-11-04T05:04:18.529741915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6hd9,Uid:94fcf684-0a62-4c39-8375-7014918c21f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\""
Nov 4 05:04:18.532255 kubelet[2784]: E1104 05:04:18.531405 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:18.538450 containerd[1605]: time="2025-11-04T05:04:18.538413441Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 4 05:04:18.551439 containerd[1605]: time="2025-11-04T05:04:18.551397345Z" level=info msg="Container f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:04:18.558928 containerd[1605]: time="2025-11-04T05:04:18.558894659Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7\""
Nov 4 05:04:18.561397 containerd[1605]: time="2025-11-04T05:04:18.561370653Z" level=info msg="StartContainer for \"f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7\""
Nov 4 05:04:18.562217 containerd[1605]: time="2025-11-04T05:04:18.562179785Z" level=info msg="connecting to shim f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7" address="unix:///run/containerd/s/668b80e6efea9839ec8acb71f7a22e0f3a60cb79b5091fe977bcbc52d85afc06" protocol=ttrpc version=3
Nov 4 05:04:18.585556 systemd[1]: Started cri-containerd-f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7.scope - libcontainer container f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7.
Nov 4 05:04:18.624280 containerd[1605]: time="2025-11-04T05:04:18.623410098Z" level=info msg="StartContainer for \"f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7\" returns successfully"
Nov 4 05:04:18.634303 systemd[1]: cri-containerd-f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7.scope: Deactivated successfully.
Nov 4 05:04:18.638621 containerd[1605]: time="2025-11-04T05:04:18.638592656Z" level=info msg="received exit event container_id:\"f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7\" id:\"f9c67e069fa7a7b3220f02fe77fff8a188a8037a8dad34914acb2f502534f7b7\" pid:4578 exited_at:{seconds:1762232658 nanos:636700383}"
Nov 4 05:04:18.754995 kubelet[2784]: E1104 05:04:18.754946 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:18.760372 containerd[1605]: time="2025-11-04T05:04:18.760326301Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 4 05:04:18.767129 containerd[1605]: time="2025-11-04T05:04:18.767096173Z" level=info msg="Container 85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:04:18.772955 containerd[1605]: time="2025-11-04T05:04:18.772913344Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f\""
Nov 4 05:04:18.775208 containerd[1605]: time="2025-11-04T05:04:18.774420867Z" level=info msg="StartContainer for \"85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f\""
Nov 4 05:04:18.775208 containerd[1605]: time="2025-11-04T05:04:18.775079428Z" level=info msg="connecting to shim 85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f" address="unix:///run/containerd/s/668b80e6efea9839ec8acb71f7a22e0f3a60cb79b5091fe977bcbc52d85afc06" protocol=ttrpc version=3
Nov 4 05:04:18.800381 systemd[1]: Started cri-containerd-85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f.scope - libcontainer container 85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f.
Nov 4 05:04:18.833835 containerd[1605]: time="2025-11-04T05:04:18.833782956Z" level=info msg="StartContainer for \"85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f\" returns successfully"
Nov 4 05:04:18.840924 systemd[1]: cri-containerd-85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f.scope: Deactivated successfully.
Nov 4 05:04:18.841383 containerd[1605]: time="2025-11-04T05:04:18.841340500Z" level=info msg="received exit event container_id:\"85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f\" id:\"85c6f9bd959f7db63e1f30b9ea9187bfce3a4524649bbf74503fed2f81bdbb4f\" pid:4623 exited_at:{seconds:1762232658 nanos:841042240}"
Nov 4 05:04:19.327592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105000625.mount: Deactivated successfully.
Nov 4 05:04:19.758495 kubelet[2784]: E1104 05:04:19.758449 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:19.762497 containerd[1605]: time="2025-11-04T05:04:19.762461760Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 4 05:04:19.777956 containerd[1605]: time="2025-11-04T05:04:19.777863649Z" level=info msg="Container 4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:04:19.793494 containerd[1605]: time="2025-11-04T05:04:19.792152346Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2\""
Nov 4 05:04:19.794271 containerd[1605]: time="2025-11-04T05:04:19.793864169Z" level=info msg="StartContainer for \"4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2\""
Nov 4 05:04:19.795017 containerd[1605]: time="2025-11-04T05:04:19.794987941Z" level=info msg="connecting to shim 4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2" address="unix:///run/containerd/s/668b80e6efea9839ec8acb71f7a22e0f3a60cb79b5091fe977bcbc52d85afc06" protocol=ttrpc version=3
Nov 4 05:04:19.829366 systemd[1]: Started cri-containerd-4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2.scope - libcontainer container 4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2.
Nov 4 05:04:19.880069 containerd[1605]: time="2025-11-04T05:04:19.879969571Z" level=info msg="StartContainer for \"4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2\" returns successfully"
Nov 4 05:04:19.880879 systemd[1]: cri-containerd-4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2.scope: Deactivated successfully.
Nov 4 05:04:19.884551 containerd[1605]: time="2025-11-04T05:04:19.884177739Z" level=info msg="received exit event container_id:\"4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2\" id:\"4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2\" pid:4668 exited_at:{seconds:1762232659 nanos:883459758}"
Nov 4 05:04:19.904679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d989a396a023c2cb5f747d2fec03eac21b8a07cdcb310b64387fcd65ca7c4c2-rootfs.mount: Deactivated successfully.
Nov 4 05:04:20.436290 kubelet[2784]: I1104 05:04:20.435358 2784 setters.go:543] "Node became not ready" node="172-237-150-130" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-04T05:04:20Z","lastTransitionTime":"2025-11-04T05:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 4 05:04:20.765992 kubelet[2784]: E1104 05:04:20.765294 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:20.775398 containerd[1605]: time="2025-11-04T05:04:20.775341566Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 4 05:04:20.792905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482157285.mount: Deactivated successfully.
Nov 4 05:04:20.796518 containerd[1605]: time="2025-11-04T05:04:20.796456586Z" level=info msg="Container 4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:04:20.801608 containerd[1605]: time="2025-11-04T05:04:20.801497206Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb\""
Nov 4 05:04:20.803548 containerd[1605]: time="2025-11-04T05:04:20.802189707Z" level=info msg="StartContainer for \"4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb\""
Nov 4 05:04:20.803548 containerd[1605]: time="2025-11-04T05:04:20.803004809Z" level=info msg="connecting to shim 4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb" address="unix:///run/containerd/s/668b80e6efea9839ec8acb71f7a22e0f3a60cb79b5091fe977bcbc52d85afc06" protocol=ttrpc version=3
Nov 4 05:04:20.830451 systemd[1]: Started cri-containerd-4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb.scope - libcontainer container 4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb.
Nov 4 05:04:20.865510 systemd[1]: cri-containerd-4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb.scope: Deactivated successfully.
Nov 4 05:04:20.867402 containerd[1605]: time="2025-11-04T05:04:20.867342812Z" level=info msg="received exit event container_id:\"4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb\" id:\"4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb\" pid:4707 exited_at:{seconds:1762232660 nanos:866155390}"
Nov 4 05:04:20.870384 containerd[1605]: time="2025-11-04T05:04:20.870328468Z" level=info msg="StartContainer for \"4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb\" returns successfully"
Nov 4 05:04:20.896204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4016d19223d6b3e35e1bd9bb906b38bd30300352fa0dd8053d8033d5a11d31bb-rootfs.mount: Deactivated successfully.
Nov 4 05:04:21.770956 kubelet[2784]: E1104 05:04:21.770910 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:21.776253 containerd[1605]: time="2025-11-04T05:04:21.775856905Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 4 05:04:21.790248 containerd[1605]: time="2025-11-04T05:04:21.789632292Z" level=info msg="Container c039fe7d1ceb92f9ac3867a4b2a8500fe5b8763140dbc25055d1a5666b09f674: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:04:21.793971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909475463.mount: Deactivated successfully.
Nov 4 05:04:21.799249 containerd[1605]: time="2025-11-04T05:04:21.798589479Z" level=info msg="CreateContainer within sandbox \"a711802af50deac99135a8add4d362e8905a2536ff0bbe0d1f1de6e45ba2122c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c039fe7d1ceb92f9ac3867a4b2a8500fe5b8763140dbc25055d1a5666b09f674\""
Nov 4 05:04:21.801609 containerd[1605]: time="2025-11-04T05:04:21.801585125Z" level=info msg="StartContainer for \"c039fe7d1ceb92f9ac3867a4b2a8500fe5b8763140dbc25055d1a5666b09f674\""
Nov 4 05:04:21.804150 containerd[1605]: time="2025-11-04T05:04:21.804014360Z" level=info msg="connecting to shim c039fe7d1ceb92f9ac3867a4b2a8500fe5b8763140dbc25055d1a5666b09f674" address="unix:///run/containerd/s/668b80e6efea9839ec8acb71f7a22e0f3a60cb79b5091fe977bcbc52d85afc06" protocol=ttrpc version=3
Nov 4 05:04:21.830469 systemd[1]: Started cri-containerd-c039fe7d1ceb92f9ac3867a4b2a8500fe5b8763140dbc25055d1a5666b09f674.scope - libcontainer container c039fe7d1ceb92f9ac3867a4b2a8500fe5b8763140dbc25055d1a5666b09f674.
Nov 4 05:04:21.885833 containerd[1605]: time="2025-11-04T05:04:21.885800280Z" level=info msg="StartContainer for \"c039fe7d1ceb92f9ac3867a4b2a8500fe5b8763140dbc25055d1a5666b09f674\" returns successfully"
Nov 4 05:04:22.403264 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 4 05:04:22.778257 kubelet[2784]: E1104 05:04:22.778033 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:22.797211 kubelet[2784]: I1104 05:04:22.797151 2784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t6hd9" podStartSLOduration=6.797138739 podStartE2EDuration="6.797138739s" podCreationTimestamp="2025-11-04 05:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:04:22.796453068 +0000 UTC m=+175.686788825" watchObservedRunningTime="2025-11-04 05:04:22.797138739 +0000 UTC m=+175.687474496"
Nov 4 05:04:24.043709 kubelet[2784]: E1104 05:04:24.043646 2784 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43438->127.0.0.1:36241: write tcp 127.0.0.1:43438->127.0.0.1:36241: write: broken pipe
Nov 4 05:04:24.437435 kubelet[2784]: E1104 05:04:24.437158 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:25.257194 kubelet[2784]: E1104 05:04:25.257157 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:25.376568 systemd-networkd[1518]: lxc_health: Link UP
Nov 4 05:04:25.383244 systemd-networkd[1518]: lxc_health: Gained carrier
Nov 4 05:04:26.437746 kubelet[2784]: E1104 05:04:26.437549 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:26.788116 kubelet[2784]: E1104 05:04:26.787466 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:27.135394 systemd-networkd[1518]: lxc_health: Gained IPv6LL
Nov 4 05:04:27.256040 kubelet[2784]: E1104 05:04:27.254937 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:27.265712 containerd[1605]: time="2025-11-04T05:04:27.265050249Z" level=info msg="StopPodSandbox for \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\""
Nov 4 05:04:27.267165 containerd[1605]: time="2025-11-04T05:04:27.266995983Z" level=info msg="TearDown network for sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" successfully"
Nov 4 05:04:27.267165 containerd[1605]: time="2025-11-04T05:04:27.267062323Z" level=info msg="StopPodSandbox for \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" returns successfully"
Nov 4 05:04:27.269017 containerd[1605]: time="2025-11-04T05:04:27.268304406Z" level=info msg="RemovePodSandbox for \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\""
Nov 4 05:04:27.269017 containerd[1605]: time="2025-11-04T05:04:27.268329276Z" level=info msg="Forcibly stopping sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\""
Nov 4 05:04:27.269017 containerd[1605]: time="2025-11-04T05:04:27.268485637Z" level=info msg="TearDown network for sandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" successfully"
Nov 4 05:04:27.272211 containerd[1605]: time="2025-11-04T05:04:27.272190294Z" level=info msg="Ensure that sandbox 30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab in task-service has been cleanup successfully"
Nov 4 05:04:27.274596 containerd[1605]: time="2025-11-04T05:04:27.274575980Z" level=info msg="RemovePodSandbox \"30eb96d60aa4a182fa856d876b747f66c657e1c8b3255e769ebc49d5e8dde4ab\" returns successfully"
Nov 4 05:04:27.275099 containerd[1605]: time="2025-11-04T05:04:27.275062491Z" level=info msg="StopPodSandbox for \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\""
Nov 4 05:04:27.275679 containerd[1605]: time="2025-11-04T05:04:27.275625802Z" level=info msg="TearDown network for sandbox \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" successfully"
Nov 4 05:04:27.275679 containerd[1605]: time="2025-11-04T05:04:27.275656182Z" level=info msg="StopPodSandbox for \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" returns successfully"
Nov 4 05:04:27.276444 containerd[1605]: time="2025-11-04T05:04:27.276014673Z" level=info msg="RemovePodSandbox for \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\""
Nov 4 05:04:27.276444 containerd[1605]: time="2025-11-04T05:04:27.276045503Z" level=info msg="Forcibly stopping sandbox \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\""
Nov 4 05:04:27.276444 containerd[1605]: time="2025-11-04T05:04:27.276320893Z" level=info msg="TearDown network for sandbox \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" successfully"
Nov 4 05:04:27.278301 containerd[1605]: time="2025-11-04T05:04:27.278280178Z" level=info msg="Ensure that sandbox 864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62 in task-service has been cleanup successfully"
Nov 4 05:04:27.280206 containerd[1605]: time="2025-11-04T05:04:27.280188112Z" level=info msg="RemovePodSandbox \"864e54aa58623fef61eb6f27accb4a6ee992419b082d178ec658b8746747bf62\" returns successfully"
Nov 4 05:04:27.796256 kubelet[2784]: E1104 05:04:27.794855 2784 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 4 05:04:30.608727 sshd[4510]: Connection closed by 139.178.89.65 port 51820
Nov 4 05:04:30.609549 sshd-session[4507]: pam_unix(sshd:session): session closed for user core
Nov 4 05:04:30.614490 systemd[1]: sshd@23-172.237.150.130:22-139.178.89.65:51820.service: Deactivated successfully.
Nov 4 05:04:30.616747 systemd[1]: session-24.scope: Deactivated successfully.
Nov 4 05:04:30.618207 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit.
Nov 4 05:04:30.620500 systemd-logind[1584]: Removed session 24.