Aug 13 01:24:41.868028 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 01:24:41.868045 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:24:41.868052 kernel: BIOS-provided physical RAM map:
Aug 13 01:24:41.868072 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:24:41.868077 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:24:41.868081 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:24:41.868087 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:24:41.868091 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:24:41.868096 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:24:41.868100 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:24:41.868105 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:24:41.868110 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:24:41.868116 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:24:41.868120 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:24:41.868126 kernel: NX (Execute Disable) protection: active
Aug 13 01:24:41.868131 kernel: APIC: Static calls initialized
Aug 13 01:24:41.868136 kernel: SMBIOS 2.8 present.
Aug 13 01:24:41.868142 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:24:41.868147 kernel: DMI: Memory slots populated: 1/1
Aug 13 01:24:41.868152 kernel: Hypervisor detected: KVM
Aug 13 01:24:41.868157 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:24:41.868162 kernel: kvm-clock: using sched offset of 5175088091 cycles
Aug 13 01:24:41.868167 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:24:41.868172 kernel: tsc: Detected 1999.999 MHz processor
Aug 13 01:24:41.868178 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:24:41.868183 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:24:41.868188 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:24:41.868195 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:24:41.868200 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:24:41.868205 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:24:41.868210 kernel: Using GB pages for direct mapping
Aug 13 01:24:41.868215 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:24:41.868220 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:24:41.868225 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:41.868230 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:41.868235 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:41.868241 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:24:41.868246 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:41.868251 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:41.868257 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:41.868264 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:41.868270 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:24:41.868276 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:24:41.868282 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:24:41.868287 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:24:41.868292 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:24:41.868298 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:24:41.868303 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:24:41.868308 kernel: No NUMA configuration found
Aug 13 01:24:41.868328 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:24:41.868336 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Aug 13 01:24:41.868341 kernel: Zone ranges:
Aug 13 01:24:41.868346 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:24:41.868362 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:24:41.868376 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:24:41.868382 kernel: Device empty
Aug 13 01:24:41.868405 kernel: Movable zone start for each node
Aug 13 01:24:41.868411 kernel: Early memory node ranges
Aug 13 01:24:41.868416 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:24:41.868421 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:24:41.868429 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:24:41.868434 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:24:41.868439 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:24:41.868444 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:24:41.868459 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:24:41.868474 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:24:41.868488 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:24:41.868494 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:24:41.868526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:24:41.868534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:24:41.868540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:24:41.868545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:24:41.868550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:24:41.868555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:24:41.868561 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:24:41.868566 kernel: TSC deadline timer available
Aug 13 01:24:41.868571 kernel: CPU topo: Max. logical packages: 1
Aug 13 01:24:41.868576 kernel: CPU topo: Max. logical dies: 1
Aug 13 01:24:41.868583 kernel: CPU topo: Max. dies per package: 1
Aug 13 01:24:41.868588 kernel: CPU topo: Max. threads per core: 1
Aug 13 01:24:41.868593 kernel: CPU topo: Num. cores per package: 2
Aug 13 01:24:41.868598 kernel: CPU topo: Num. threads per package: 2
Aug 13 01:24:41.868604 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 01:24:41.868609 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:24:41.868614 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:24:41.868619 kernel: kvm-guest: setup PV sched yield
Aug 13 01:24:41.868624 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:24:41.868631 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:24:41.868637 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:24:41.868642 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:24:41.868647 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 01:24:41.868653 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 01:24:41.868658 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:24:41.868663 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:24:41.868668 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:24:41.868674 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:24:41.868681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:24:41.868687 kernel: random: crng init done
Aug 13 01:24:41.868692 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:24:41.868697 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:24:41.868702 kernel: Fallback order for Node 0: 0
Aug 13 01:24:41.868708 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 01:24:41.868713 kernel: Policy zone: Normal
Aug 13 01:24:41.868718 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:24:41.868725 kernel: software IO TLB: area num 2.
Aug 13 01:24:41.868730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:24:41.868735 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 01:24:41.868741 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 01:24:41.868746 kernel: Dynamic Preempt: voluntary
Aug 13 01:24:41.868751 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:24:41.868757 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:24:41.868763 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:24:41.868768 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:24:41.868773 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:24:41.868780 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:24:41.868785 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:24:41.868791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:24:41.868796 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:24:41.868806 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:24:41.868814 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:24:41.868819 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:24:41.868825 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:24:41.868830 kernel: Console: colour VGA+ 80x25
Aug 13 01:24:41.868835 kernel: printk: legacy console [tty0] enabled
Aug 13 01:24:41.868841 kernel: printk: legacy console [ttyS0] enabled
Aug 13 01:24:41.868848 kernel: ACPI: Core revision 20240827
Aug 13 01:24:41.868853 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:24:41.868859 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:24:41.868864 kernel: x2apic enabled
Aug 13 01:24:41.868870 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:24:41.868877 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:24:41.868883 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:24:41.868888 kernel: kvm-guest: setup PV IPIs
Aug 13 01:24:41.868894 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:24:41.868900 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Aug 13 01:24:41.868905 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Aug 13 01:24:41.868911 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:24:41.868916 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:24:41.868922 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:24:41.868929 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:24:41.868934 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:24:41.868940 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:24:41.868945 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:24:41.868951 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:24:41.868956 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:24:41.868962 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:24:41.868968 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:24:41.868975 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:24:41.868981 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:24:41.868986 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:24:41.868992 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:24:41.868997 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:24:41.869003 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:24:41.869008 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:24:41.869014 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:24:41.869019 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:24:41.869026 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:24:41.869032 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:24:41.869037 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:24:41.869042 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 01:24:41.869048 kernel: landlock: Up and running.
Aug 13 01:24:41.869053 kernel: SELinux: Initializing.
Aug 13 01:24:41.869059 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:24:41.869064 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:24:41.869070 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:24:41.869077 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:24:41.869082 kernel: ... version: 0
Aug 13 01:24:41.869088 kernel: ... bit width: 48
Aug 13 01:24:41.869093 kernel: ... generic registers: 6
Aug 13 01:24:41.869099 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:24:41.869104 kernel: ... max period: 00007fffffffffff
Aug 13 01:24:41.869110 kernel: ... fixed-purpose events: 0
Aug 13 01:24:41.869130 kernel: ... event mask: 000000000000003f
Aug 13 01:24:41.869140 kernel: signal: max sigframe size: 3376
Aug 13 01:24:41.869160 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:24:41.869174 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:24:41.869180 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 01:24:41.869185 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:24:41.869191 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:24:41.869196 kernel: .... node #0, CPUs: #1
Aug 13 01:24:41.869202 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:24:41.869207 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Aug 13 01:24:41.869213 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227288K reserved, 0K cma-reserved)
Aug 13 01:24:41.869220 kernel: devtmpfs: initialized
Aug 13 01:24:41.869225 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:24:41.869231 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:24:41.869237 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:24:41.869242 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:24:41.869248 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:24:41.869253 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:24:41.869259 kernel: audit: type=2000 audit(1755048280.291:1): state=initialized audit_enabled=0 res=1
Aug 13 01:24:41.869264 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:24:41.869271 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:24:41.869277 kernel: cpuidle: using governor menu
Aug 13 01:24:41.869282 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:24:41.869288 kernel: dca service started, version 1.12.1
Aug 13 01:24:41.869293 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 01:24:41.869299 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:24:41.869304 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:24:41.869310 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:24:41.869315 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:24:41.869322 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:24:41.869328 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:24:41.869333 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:24:41.869339 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:24:41.869344 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:24:41.869350 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:24:41.869355 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:24:41.869360 kernel: ACPI: Interpreter enabled
Aug 13 01:24:41.869366 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:24:41.869373 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:24:41.869378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:24:41.869384 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:24:41.869389 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:24:41.869395 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:24:41.869612 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:24:41.869713 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:24:41.869808 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:24:41.869816 kernel: PCI host bridge to bus 0000:00
Aug 13 01:24:41.869915 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:24:41.870007 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:24:41.870091 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:24:41.870172 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:24:41.870260 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:24:41.870341 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:24:41.870427 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:24:41.870563 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 01:24:41.870672 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 01:24:41.870764 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:24:41.870854 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:24:41.870944 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:24:41.871044 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:24:41.871146 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 01:24:41.871245 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 01:24:41.871335 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:24:41.871424 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:24:41.873417 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 01:24:41.873555 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 01:24:41.873658 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:24:41.873750 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:24:41.873840 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:24:41.873943 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 01:24:41.874033 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:24:41.874132 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 01:24:41.874225 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 01:24:41.874313 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:24:41.874411 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 01:24:41.874520 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 01:24:41.874530 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:24:41.874536 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:24:41.874541 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:24:41.874550 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:24:41.874556 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:24:41.874561 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:24:41.874567 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:24:41.874573 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:24:41.874578 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:24:41.874584 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:24:41.874589 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:24:41.874595 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:24:41.874602 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:24:41.874607 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:24:41.874613 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:24:41.874619 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:24:41.874624 kernel: iommu: Default domain type: Translated
Aug 13 01:24:41.874630 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:24:41.874636 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:24:41.874642 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:24:41.874647 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:24:41.874655 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:24:41.874748 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:24:41.874836 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:24:41.874923 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:24:41.874931 kernel: vgaarb: loaded
Aug 13 01:24:41.874937 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:24:41.874943 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:24:41.874948 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:24:41.874954 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:24:41.874962 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:24:41.874967 kernel: pnp: PnP ACPI init
Aug 13 01:24:41.875072 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:24:41.875081 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:24:41.875087 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:24:41.875093 kernel: NET: Registered PF_INET protocol family
Aug 13 01:24:41.875098 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:24:41.875104 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:24:41.875112 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:24:41.875118 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:24:41.875124 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:24:41.875129 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:24:41.875136 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:24:41.875141 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:24:41.875159 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:24:41.875174 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:24:41.875305 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:24:41.875400 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:24:41.877567 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:24:41.877661 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:24:41.877744 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:24:41.877824 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:24:41.877832 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:24:41.877838 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:24:41.877843 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:24:41.877852 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Aug 13 01:24:41.877859 kernel: Initialise system trusted keyrings
Aug 13 01:24:41.877864 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:24:41.877870 kernel: Key type asymmetric registered
Aug 13 01:24:41.877875 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:24:41.877881 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 01:24:41.877886 kernel: io scheduler mq-deadline registered
Aug 13 01:24:41.877892 kernel: io scheduler kyber registered
Aug 13 01:24:41.877898 kernel: io scheduler bfq registered
Aug 13 01:24:41.877906 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:24:41.877912 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:24:41.877918 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:24:41.877923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:24:41.877930 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:24:41.877935 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:24:41.877941 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:24:41.877947 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:24:41.877953 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:24:41.878050 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:24:41.878148 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:24:41.878232 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:24:41 UTC (1755048281)
Aug 13 01:24:41.878321 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:24:41.878329 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:24:41.878335 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:24:41.878340 kernel: Segment Routing with IPv6
Aug 13 01:24:41.878347 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:24:41.878354 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:24:41.878360 kernel: Key type dns_resolver registered
Aug 13 01:24:41.878366 kernel: IPI shorthand broadcast: enabled
Aug 13 01:24:41.878372 kernel: sched_clock: Marking stable (2353004937, 184702547)->(2586752633, -49045149)
Aug 13 01:24:41.878377 kernel: registered taskstats version 1
Aug 13 01:24:41.878383 kernel: Loading compiled-in X.509 certificates
Aug 13 01:24:41.878388 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 01:24:41.878394 kernel: Demotion targets for Node 0: null
Aug 13 01:24:41.878400 kernel: Key type .fscrypt registered
Aug 13 01:24:41.878407 kernel: Key type fscrypt-provisioning registered
Aug 13 01:24:41.878412 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:24:41.878418 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:24:41.878423 kernel: ima: No architecture policies found
Aug 13 01:24:41.878429 kernel: clk: Disabling unused clocks
Aug 13 01:24:41.878434 kernel: Warning: unable to open an initial console.
Aug 13 01:24:41.878441 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 01:24:41.878446 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 01:24:41.878452 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 01:24:41.878459 kernel: Run /init as init process
Aug 13 01:24:41.878464 kernel: with arguments:
Aug 13 01:24:41.878470 kernel: /init
Aug 13 01:24:41.878475 kernel: with environment:
Aug 13 01:24:41.878481 kernel: HOME=/
Aug 13 01:24:41.878513 kernel: TERM=linux
Aug 13 01:24:41.878521 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:24:41.878528 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:24:41.878539 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:24:41.878545 systemd[1]: Detected virtualization kvm.
Aug 13 01:24:41.878552 systemd[1]: Detected architecture x86-64.
Aug 13 01:24:41.878558 systemd[1]: Running in initrd.
Aug 13 01:24:41.878564 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:24:41.878570 systemd[1]: Hostname set to .
Aug 13 01:24:41.878576 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:24:41.878583 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:24:41.878591 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:24:41.878597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:24:41.878604 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:24:41.878610 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:24:41.878616 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:24:41.878624 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:24:41.878633 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:24:41.878639 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:24:41.878646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:24:41.878652 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:24:41.878658 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:24:41.878665 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:24:41.878671 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:24:41.878677 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:24:41.878683 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:24:41.878691 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:24:41.878697 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:24:41.878704 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 01:24:41.878710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:24:41.878716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:24:41.878722 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:24:41.878728 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:24:41.878736 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 01:24:41.878742 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:24:41.878749 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 01:24:41.878755 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 01:24:41.878761 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:24:41.878767 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:24:41.878774 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:24:41.878782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:24:41.878788 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 01:24:41.878795 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:24:41.878801 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:24:41.878809 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:24:41.878833 systemd-journald[206]: Collecting audit messages is disabled.
Aug 13 01:24:41.878850 systemd-journald[206]: Journal started
Aug 13 01:24:41.878867 systemd-journald[206]: Runtime Journal (/run/log/journal/97ed8d5d216b46bda2d48ba470ba198e) is 8M, max 78.5M, 70.5M free.
Aug 13 01:24:41.869492 systemd-modules-load[207]: Inserted module 'overlay'
Aug 13 01:24:41.881596 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:24:41.901651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:24:41.907517 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:24:41.909069 systemd-modules-load[207]: Inserted module 'br_netfilter'
Aug 13 01:24:41.958100 kernel: Bridge firewalling registered
Aug 13 01:24:41.910665 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:24:41.959685 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:24:41.966986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:24:41.969327 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:24:41.974627 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:24:41.977397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:24:41.978655 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 01:24:41.980304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:24:41.986782 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:24:41.996788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:24:41.999700 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:24:42.001770 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:24:42.003592 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 01:24:42.022113 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:24:42.037536 systemd-resolved[243]: Positive Trust Anchors:
Aug 13 01:24:42.038135 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:24:42.038161 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:24:42.042660 systemd-resolved[243]: Defaulting to hostname 'linux'.
Aug 13 01:24:42.043525 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:24:42.044238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:24:42.110545 kernel: SCSI subsystem initialized
Aug 13 01:24:42.117561 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:24:42.127537 kernel: iscsi: registered transport (tcp)
Aug 13 01:24:42.144538 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:24:42.144571 kernel: QLogic iSCSI HBA Driver
Aug 13 01:24:42.164764 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:24:42.177122 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:24:42.179987 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:24:42.235230 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:24:42.237882 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 01:24:42.289525 kernel: raid6: avx2x4 gen() 24698 MB/s
Aug 13 01:24:42.306522 kernel: raid6: avx2x2 gen() 20929 MB/s
Aug 13 01:24:42.324906 kernel: raid6: avx2x1 gen() 14542 MB/s
Aug 13 01:24:42.324921 kernel: raid6: using algorithm avx2x4 gen() 24698 MB/s
Aug 13 01:24:42.343893 kernel: raid6: .... xor() 2846 MB/s, rmw enabled
Aug 13 01:24:42.343920 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:24:42.363553 kernel: xor: automatically using best checksumming function avx
Aug 13 01:24:42.472538 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 01:24:42.481381 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:24:42.483947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:24:42.515899 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Aug 13 01:24:42.520975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:24:42.523634 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 01:24:42.548115 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation
Aug 13 01:24:42.577089 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:24:42.579299 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:24:42.641082 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:24:42.644862 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 01:24:42.702519 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:24:42.715559 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Aug 13 01:24:42.844134 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:24:42.853802 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 01:24:42.877524 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:24:42.879521 kernel: libata version 3.00 loaded.
Aug 13 01:24:42.895818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:24:42.895953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:24:42.900217 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:24:42.903939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:24:42.907156 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:24:42.923399 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 01:24:42.930534 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 01:24:42.933530 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 01:24:42.937156 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 01:24:42.937320 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 01:24:42.937452 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 01:24:42.939867 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 01:24:42.943258 kernel: scsi host1: ahci
Aug 13 01:24:42.943424 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 01:24:42.946672 kernel: scsi host2: ahci
Aug 13 01:24:42.946870 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 01:24:42.947028 kernel: scsi host3: ahci
Aug 13 01:24:42.947171 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 01:24:42.947309 kernel: scsi host4: ahci
Aug 13 01:24:42.947447 kernel: scsi host5: ahci
Aug 13 01:24:42.947473 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:24:42.952518 kernel: scsi host6: ahci
Aug 13 01:24:42.953152 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Aug 13 01:24:42.953171 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Aug 13 01:24:42.953182 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Aug 13 01:24:42.953196 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Aug 13 01:24:42.953204 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Aug 13 01:24:42.953213 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Aug 13 01:24:42.958463 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:24:42.958483 kernel: GPT:9289727 != 9297919
Aug 13 01:24:42.958492 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:24:42.958515 kernel: GPT:9289727 != 9297919
Aug 13 01:24:42.958524 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:24:42.958533 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:24:42.958545 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 01:24:43.038056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:24:43.265545 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 01:24:43.265579 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 01:24:43.265590 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 01:24:43.265599 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 01:24:43.265608 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 01:24:43.265622 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 01:24:43.321869 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 01:24:43.328777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:24:43.335888 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 01:24:43.336596 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:24:43.343538 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 01:24:43.344061 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 01:24:43.346410 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:24:43.346933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:24:43.348009 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:24:43.349656 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 01:24:43.351976 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 01:24:43.369113 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:24:43.373114 disk-uuid[631]: Primary Header is updated.
Aug 13 01:24:43.373114 disk-uuid[631]: Secondary Entries is updated.
Aug 13 01:24:43.373114 disk-uuid[631]: Secondary Header is updated.
Aug 13 01:24:43.374550 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:24:44.397249 disk-uuid[639]: The operation has completed successfully.
Aug 13 01:24:44.398002 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:24:44.449327 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:24:44.449437 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 01:24:44.472241 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 01:24:44.486362 sh[653]: Success
Aug 13 01:24:44.505136 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:24:44.505161 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:24:44.508540 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 01:24:44.516630 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 01:24:44.561739 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 01:24:44.564272 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 01:24:44.582309 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 01:24:44.592533 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 01:24:44.592554 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (665)
Aug 13 01:24:44.596558 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 01:24:44.600398 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:24:44.600420 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 01:24:44.609704 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 01:24:44.610722 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:24:44.611459 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 01:24:44.612093 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 01:24:44.615485 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 01:24:44.638549 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (698)
Aug 13 01:24:44.641519 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:24:44.644997 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:24:44.645017 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:24:44.653546 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:24:44.655078 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 01:24:44.657094 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 01:24:44.735608 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:24:44.743054 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:24:44.753079 ignition[763]: Ignition 2.21.0
Aug 13 01:24:44.753093 ignition[763]: Stage: fetch-offline
Aug 13 01:24:44.753120 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:24:44.753128 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:24:44.754779 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:24:44.753189 ignition[763]: parsed url from cmdline: ""
Aug 13 01:24:44.753192 ignition[763]: no config URL provided
Aug 13 01:24:44.753195 ignition[763]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:24:44.753202 ignition[763]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:24:44.753206 ignition[763]: failed to fetch config: resource requires networking
Aug 13 01:24:44.753430 ignition[763]: Ignition finished successfully
Aug 13 01:24:44.776639 systemd-networkd[839]: lo: Link UP
Aug 13 01:24:44.776649 systemd-networkd[839]: lo: Gained carrier
Aug 13 01:24:44.777830 systemd-networkd[839]: Enumeration completed
Aug 13 01:24:44.778134 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:24:44.778138 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:24:44.778652 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:24:44.779611 systemd-networkd[839]: eth0: Link UP
Aug 13 01:24:44.779744 systemd-networkd[839]: eth0: Gained carrier
Aug 13 01:24:44.779752 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:24:44.780137 systemd[1]: Reached target network.target - Network.
Aug 13 01:24:44.782253 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 01:24:44.815479 ignition[843]: Ignition 2.21.0
Aug 13 01:24:44.815489 ignition[843]: Stage: fetch
Aug 13 01:24:44.815648 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:24:44.815662 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:24:44.815749 ignition[843]: parsed url from cmdline: ""
Aug 13 01:24:44.815753 ignition[843]: no config URL provided
Aug 13 01:24:44.815758 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:24:44.815766 ignition[843]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:24:44.815792 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 01:24:44.816291 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:24:45.016443 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 01:24:45.016614 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:24:45.342605 systemd-networkd[839]: eth0: DHCPv4 address 172.233.222.9/24, gateway 172.233.222.1 acquired from 23.40.197.134
Aug 13 01:24:45.416845 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 01:24:45.523228 ignition[843]: PUT result: OK
Aug 13 01:24:45.523793 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 01:24:45.682704 ignition[843]: GET result: OK
Aug 13 01:24:45.682781 ignition[843]: parsing config with SHA512: a7f0e141341c7722d37ada456b275e48fb1a529730361f775bf786d827deacccb1daca759d41af64a227a64e4dbdbadf9c4cf4f34dd418cb33d22dd3608df8cb
Aug 13 01:24:45.685775 unknown[843]: fetched base config from "system"
Aug 13 01:24:45.685785 unknown[843]: fetched base config from "system"
Aug 13 01:24:45.686038 ignition[843]: fetch: fetch complete
Aug 13 01:24:45.685790 unknown[843]: fetched user config from "akamai"
Aug 13 01:24:45.686042 ignition[843]: fetch: fetch passed
Aug 13 01:24:45.686083 ignition[843]: Ignition finished successfully
Aug 13 01:24:45.689753 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 01:24:45.693609 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 01:24:45.728479 ignition[850]: Ignition 2.21.0
Aug 13 01:24:45.728494 ignition[850]: Stage: kargs
Aug 13 01:24:45.728619 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:24:45.728630 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:24:45.729163 ignition[850]: kargs: kargs passed
Aug 13 01:24:45.730463 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 01:24:45.729196 ignition[850]: Ignition finished successfully
Aug 13 01:24:45.733026 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 01:24:45.752941 ignition[857]: Ignition 2.21.0
Aug 13 01:24:45.752951 ignition[857]: Stage: disks
Aug 13 01:24:45.753066 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:24:45.753076 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:24:45.753772 ignition[857]: disks: disks passed
Aug 13 01:24:45.755050 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 01:24:45.753807 ignition[857]: Ignition finished successfully
Aug 13 01:24:45.756168 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 01:24:45.756991 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 01:24:45.758043 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:24:45.759024 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:24:45.760168 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:24:45.762066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 01:24:45.785080 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 01:24:45.788605 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 01:24:45.790485 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 01:24:45.888521 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 01:24:45.889412 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 01:24:45.890286 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:24:45.892159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:24:45.895402 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 01:24:45.896656 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 01:24:45.896696 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:24:45.896720 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:24:45.905281 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 01:24:45.906576 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 01:24:45.916786 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (873)
Aug 13 01:24:45.916812 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:24:45.919835 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:24:45.922024 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:24:45.925946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:24:45.949707 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:24:45.953793 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:24:45.957120 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:24:45.960440 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:24:46.026641 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 01:24:46.028700 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 01:24:46.030304 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 01:24:46.040728 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 01:24:46.043076 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:24:46.058606 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 01:24:46.061866 ignition[985]: INFO : Ignition 2.21.0
Aug 13 01:24:46.063098 ignition[985]: INFO : Stage: mount
Aug 13 01:24:46.063098 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:24:46.063098 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:24:46.063098 ignition[985]: INFO : mount: mount passed
Aug 13 01:24:46.063098 ignition[985]: INFO : Ignition finished successfully
Aug 13 01:24:46.065962 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 01:24:46.067298 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 01:24:46.788701 systemd-networkd[839]: eth0: Gained IPv6LL
Aug 13 01:24:46.891221 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:24:46.923608 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (997)
Aug 13 01:24:46.923671 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:24:46.925811 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:24:46.928335 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:24:46.933054 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:24:46.960932 ignition[1013]: INFO : Ignition 2.21.0
Aug 13 01:24:46.960932 ignition[1013]: INFO : Stage: files
Aug 13 01:24:46.962057 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:24:46.962057 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:24:46.962057 ignition[1013]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:24:46.963966 ignition[1013]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:24:46.963966 ignition[1013]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:24:46.965288 ignition[1013]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:24:46.965288 ignition[1013]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:24:46.965288 ignition[1013]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:24:46.964520 unknown[1013]: wrote ssh authorized keys file for user: core
Aug 13 01:24:46.967867 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:24:46.967867 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 01:24:48.042601 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 01:24:48.761910 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:24:48.763179 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:24:48.763179 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 01:24:48.898604 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 01:24:48.986540 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:24:48.986540 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:24:48.986540 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:24:48.986540 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:24:48.986540 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:24:48.992203 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 01:24:49.232659 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 01:24:49.558075 ignition[1013]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:24:49.558075 ignition[1013]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 01:24:49.559963 ignition[1013]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:24:49.560803 ignition[1013]: INFO : files: files passed
Aug 13 01:24:49.560803 ignition[1013]: INFO : Ignition finished successfully
Aug 13 01:24:49.564284 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 01:24:49.566727 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 01:24:49.572618 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 01:24:49.588966 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:24:49.589055 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 01:24:49.593952 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:24:49.593952 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:24:49.595781 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:24:49.597192 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:24:49.598375 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 01:24:49.599605 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 01:24:49.636120 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 01:24:49.636212 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 01:24:49.637259 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 01:24:49.638060 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 01:24:49.639531 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 01:24:49.640168 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 01:24:49.677772 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:24:49.681411 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 01:24:49.694911 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:24:49.695476 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:24:49.696594 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 01:24:49.697639 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 01:24:49.697760 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:24:49.698868 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 01:24:49.699532 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 01:24:49.700546 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 01:24:49.701510 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:24:49.702400 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 01:24:49.703492 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:24:49.704536 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 01:24:49.705615 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:24:49.706677 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 01:24:49.707729 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 01:24:49.708760 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 01:24:49.709723 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 01:24:49.709834 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:24:49.710901 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:24:49.711609 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:24:49.712488 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 01:24:49.714575 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:24:49.715121 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 01:24:49.715203 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:24:49.716529 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 01:24:49.716659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:24:49.717234 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 01:24:49.717342 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 01:24:49.718844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 01:24:49.720790 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 01:24:49.720921 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:24:49.723386 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 01:24:49.725004 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 01:24:49.725095 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:24:49.726685 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 01:24:49.726774 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:24:49.734230 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 01:24:49.734320 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 01:24:49.756199 ignition[1068]: INFO : Ignition 2.21.0
Aug 13 01:24:49.756199 ignition[1068]: INFO : Stage: umount
Aug 13 01:24:49.756199 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:24:49.756199 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:24:49.756199 ignition[1068]: INFO : umount: umount passed
Aug 13 01:24:49.756199 ignition[1068]: INFO : Ignition finished successfully
Aug 13 01:24:49.758061 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 01:24:49.758183 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 01:24:49.761200 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 01:24:49.761248 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 01:24:49.761925 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 01:24:49.761965 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 01:24:49.763363 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 01:24:49.763411 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 01:24:49.765111 systemd[1]: Stopped target network.target - Network.
Aug 13 01:24:49.766810 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 01:24:49.766867 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:24:49.767412 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 01:24:49.768432 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 01:24:49.773540 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:24:49.774291 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 01:24:49.775183 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 01:24:49.776232 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 01:24:49.776270 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:24:49.777339 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 01:24:49.777375 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:24:49.778215 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 01:24:49.778264 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 01:24:49.779315 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 01:24:49.779361 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 01:24:49.780405 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 01:24:49.781381 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 01:24:49.783361 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 01:24:49.783878 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 01:24:49.783975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 01:24:49.785170 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 01:24:49.785245 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 01:24:49.787552 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:24:49.787660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 01:24:49.791728 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 01:24:49.791935 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:24:49.792033 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 01:24:49.793835 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 01:24:49.794446 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 01:24:49.795332 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:24:49.795371 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:24:49.796842 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 01:24:49.797959 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:24:49.798001 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:24:49.799848 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:24:49.799898 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:24:49.801646 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:24:49.801689 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:24:49.802400 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 01:24:49.802439 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:24:49.803597 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:24:49.808536 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:24:49.808588 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:24:49.817889 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 01:24:49.818001 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 01:24:49.822899 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:24:49.823059 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:24:49.824348 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:24:49.824400 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:24:49.825111 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:24:49.825143 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:24:49.826126 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:24:49.826167 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:24:49.827622 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:24:49.827663 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:24:49.828765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:24:49.828812 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:24:49.830545 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 01:24:49.831769 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 01:24:49.831812 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:24:49.835025 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 01:24:49.835071 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:24:49.836899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:24:49.836940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:24:49.839339 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 13 01:24:49.839397 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 01:24:49.839443 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:24:49.845407 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 01:24:49.845519 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 01:24:49.846761 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 01:24:49.848660 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 01:24:49.870113 systemd[1]: Switching root.
Aug 13 01:24:49.901359 systemd-journald[206]: Journal stopped
Aug 13 01:24:50.826319 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Aug 13 01:24:50.826349 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 01:24:50.826363 kernel: SELinux: policy capability open_perms=1
Aug 13 01:24:50.826377 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 01:24:50.826387 kernel: SELinux: policy capability always_check_network=0
Aug 13 01:24:50.826397 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 01:24:50.826408 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 01:24:50.826419 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 01:24:50.826429 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 01:24:50.826439 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 01:24:50.826451 kernel: audit: type=1403 audit(1755048290.052:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 01:24:50.826462 systemd[1]: Successfully loaded SELinux policy in 67.454ms.
Aug 13 01:24:50.826472 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.331ms.
Aug 13 01:24:50.826482 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:24:50.826492 systemd[1]: Detected virtualization kvm.
Aug 13 01:24:50.826530 systemd[1]: Detected architecture x86-64.
Aug 13 01:24:50.826538 systemd[1]: Detected first boot.
Aug 13 01:24:50.826546 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:24:50.826554 zram_generator::config[1115]: No configuration found.
Aug 13 01:24:50.826562 kernel: Guest personality initialized and is inactive
Aug 13 01:24:50.826569 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 01:24:50.826577 kernel: Initialized host personality
Aug 13 01:24:50.826586 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 01:24:50.826594 systemd[1]: Populated /etc with preset unit settings.
Aug 13 01:24:50.826603 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 01:24:50.826611 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 01:24:50.826618 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 01:24:50.826626 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:24:50.826634 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 01:24:50.826643 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 01:24:50.826651 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 01:24:50.826659 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 01:24:50.826667 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 01:24:50.826675 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 01:24:50.826683 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 01:24:50.826691 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 01:24:50.826701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:24:50.826710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:24:50.826717 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 01:24:50.826725 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 01:24:50.826736 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 01:24:50.826744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:24:50.826752 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 01:24:50.826760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:24:50.826770 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:24:50.826778 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 01:24:50.826786 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 01:24:50.826794 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:24:50.826803 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 01:24:50.826811 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:24:50.826819 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:24:50.826826 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:24:50.826836 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:24:50.826844 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 01:24:50.826852 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 01:24:50.826860 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 01:24:50.826869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:24:50.826878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:24:50.826887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:24:50.826895 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 01:24:50.826904 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 01:24:50.826913 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 01:24:50.826921 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 01:24:50.826929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:50.826937 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 01:24:50.826947 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 01:24:50.826955 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 01:24:50.826963 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 01:24:50.826971 systemd[1]: Reached target machines.target - Containers.
Aug 13 01:24:50.826979 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 01:24:50.826987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:24:50.826996 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:24:50.827004 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 01:24:50.827013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:24:50.827021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:24:50.827029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:24:50.827037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 01:24:50.827045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:24:50.827054 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 01:24:50.827062 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 01:24:50.827070 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 01:24:50.827078 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 01:24:50.827087 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 01:24:50.827097 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:24:50.827105 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:24:50.827113 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:24:50.827121 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:24:50.827129 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 01:24:50.827137 kernel: loop: module loaded
Aug 13 01:24:50.827145 kernel: fuse: init (API version 7.41)
Aug 13 01:24:50.827154 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 01:24:50.827162 kernel: ACPI: bus type drm_connector registered
Aug 13 01:24:50.827170 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:24:50.827178 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 01:24:50.827186 systemd[1]: Stopped verity-setup.service.
Aug 13 01:24:50.827194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:50.827202 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 01:24:50.827210 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 01:24:50.827220 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 01:24:50.827228 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 01:24:50.827253 systemd-journald[1199]: Collecting audit messages is disabled.
Aug 13 01:24:50.827269 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 01:24:50.827278 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 01:24:50.827288 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 01:24:50.827296 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:24:50.827304 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 01:24:50.827312 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 01:24:50.827320 systemd-journald[1199]: Journal started
Aug 13 01:24:50.827336 systemd-journald[1199]: Runtime Journal (/run/log/journal/b4811a6111514326b694c9ae9ab09dc5) is 8M, max 78.5M, 70.5M free.
Aug 13 01:24:50.525897 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 01:24:50.537816 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 01:24:50.538285 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 01:24:50.830015 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:24:50.830995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:24:50.831210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:24:50.832823 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:24:50.833005 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:24:50.833740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:24:50.834041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:24:50.834783 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 01:24:50.835037 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 01:24:50.835845 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:24:50.836093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:24:50.836890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:24:50.837811 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:24:50.838599 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 01:24:50.839337 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 01:24:50.852275 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:24:50.854599 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 01:24:50.856660 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 01:24:50.857180 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 01:24:50.857242 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:24:50.859611 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 01:24:50.862607 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 01:24:50.863676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:24:50.868687 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 01:24:50.872688 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 01:24:50.873277 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:24:50.874640 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 01:24:50.875153 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:24:50.876394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:24:50.882040 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 01:24:50.888893 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 01:24:50.894735 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 01:24:50.895638 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 01:24:50.901453 systemd-journald[1199]: Time spent on flushing to /var/log/journal/b4811a6111514326b694c9ae9ab09dc5 is 14.078ms for 1001 entries.
Aug 13 01:24:50.901453 systemd-journald[1199]: System Journal (/var/log/journal/b4811a6111514326b694c9ae9ab09dc5) is 8M, max 195.6M, 187.6M free.
Aug 13 01:24:50.929228 systemd-journald[1199]: Received client request to flush runtime journal.
Aug 13 01:24:50.929268 kernel: loop0: detected capacity change from 0 to 113872
Aug 13 01:24:50.912425 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:24:50.922814 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 01:24:50.924721 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 01:24:50.930677 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 01:24:50.932887 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 01:24:50.941768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:24:50.964527 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 01:24:50.969998 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 01:24:50.981689 kernel: loop1: detected capacity change from 0 to 221472
Aug 13 01:24:50.986366 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 01:24:50.989740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:24:51.015920 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Aug 13 01:24:51.016197 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Aug 13 01:24:51.021543 kernel: loop2: detected capacity change from 0 to 146240
Aug 13 01:24:51.023751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:24:51.058531 kernel: loop3: detected capacity change from 0 to 8
Aug 13 01:24:51.075520 kernel: loop4: detected capacity change from 0 to 113872
Aug 13 01:24:51.089147 kernel: loop5: detected capacity change from 0 to 221472
Aug 13 01:24:51.111540 kernel: loop6: detected capacity change from 0 to 146240
Aug 13 01:24:51.130519 kernel: loop7: detected capacity change from 0 to 8
Aug 13 01:24:51.133278 (sd-merge)[1259]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Aug 13 01:24:51.134634 (sd-merge)[1259]: Merged extensions into '/usr'.
Aug 13 01:24:51.140076 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 01:24:51.140164 systemd[1]: Reloading...
Aug 13 01:24:51.211767 zram_generator::config[1288]: No configuration found.
Aug 13 01:24:51.304430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:24:51.362967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 01:24:51.363315 systemd[1]: Reloading finished in 222 ms.
Aug 13 01:24:51.380246 ldconfig[1231]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 01:24:51.383016 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 01:24:51.384003 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 01:24:51.391603 systemd[1]: Starting ensure-sysext.service...
Aug 13 01:24:51.393681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:24:51.413571 systemd[1]: Reload requested from client PID 1328 ('systemctl') (unit ensure-sysext.service)...
Aug 13 01:24:51.413634 systemd[1]: Reloading...
Aug 13 01:24:51.429098 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 13 01:24:51.429133 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 13 01:24:51.429445 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 01:24:51.431994 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 01:24:51.434010 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 01:24:51.434262 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Aug 13 01:24:51.434364 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Aug 13 01:24:51.438578 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:24:51.438587 systemd-tmpfiles[1329]: Skipping /boot
Aug 13 01:24:51.457236 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:24:51.459569 systemd-tmpfiles[1329]: Skipping /boot
Aug 13 01:24:51.494586 zram_generator::config[1357]: No configuration found.
Aug 13 01:24:51.566572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:24:51.623530 systemd[1]: Reloading finished in 209 ms.
Aug 13 01:24:51.643138 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 01:24:51.654395 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:24:51.661615 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:24:51.663670 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 01:24:51.678899 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 01:24:51.683869 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:24:51.687679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:24:51.690804 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 01:24:51.693922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:51.694044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:24:51.698700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:24:51.701384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:24:51.705706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:24:51.706633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:24:51.706716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:24:51.706783 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:51.711155 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 01:24:51.713098 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 01:24:51.714891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:24:51.715067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:24:51.720142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:24:51.723057 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:24:51.726101 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:24:51.727560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:24:51.738489 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:24:51.743141 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:51.743780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:24:51.745263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:24:51.747096 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:24:51.756623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:24:51.762600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:24:51.763125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:24:51.763150 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:24:51.768792 systemd-udevd[1406]: Using default interface naming scheme 'v255'.
Aug 13 01:24:51.773717 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 01:24:51.778851 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 01:24:51.779466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:51.781536 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 01:24:51.789188 augenrules[1442]: No rules
Aug 13 01:24:51.788428 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:24:51.788659 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:24:51.791791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:24:51.791980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:24:51.792910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:24:51.793555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:24:51.794305 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:24:51.794466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:24:51.800933 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 01:24:51.802671 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:24:51.804056 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:24:51.804622 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 01:24:51.807358 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:24:51.810216 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:24:51.810237 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:24:51.820610 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 01:24:51.833462 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:24:51.838408 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:24:51.935193 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 01:24:51.957521 systemd-resolved[1404]: Positive Trust Anchors:
Aug 13 01:24:51.957794 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:24:51.957863 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:24:51.967213 systemd-resolved[1404]: Defaulting to hostname 'linux'.
Aug 13 01:24:51.970086 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:24:51.971132 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:24:52.013588 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 01:24:52.013629 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 01:24:52.015920 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 01:24:52.016535 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:24:52.017115 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 01:24:52.017927 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 01:24:52.018727 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Aug 13 01:24:52.019577 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 01:24:52.020265 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:24:52.020284 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:24:52.021159 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 01:24:52.021734 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 01:24:52.022676 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 01:24:52.023286 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:24:52.025259 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 01:24:52.026919 systemd-networkd[1466]: lo: Link UP
Aug 13 01:24:52.026932 systemd-networkd[1466]: lo: Gained carrier
Aug 13 01:24:52.027652 systemd-networkd[1466]: Enumeration completed
Aug 13 01:24:52.027851 systemd-timesyncd[1438]: No network connectivity, watching for changes.
Aug 13 01:24:52.028002 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 01:24:52.031917 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 01:24:52.033226 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 01:24:52.034256 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 01:24:52.039218 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 01:24:52.040810 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 01:24:52.043571 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:24:52.044402 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 01:24:52.045799 systemd[1]: Reached target network.target - Network.
Aug 13 01:24:52.047214 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:24:52.048180 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:24:52.048968 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:24:52.048993 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:24:52.050601 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 01:24:52.072576 kernel: ACPI: button: Power Button [PWRF]
Aug 13 01:24:52.082284 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 01:24:52.082481 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 01:24:52.080251 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 01:24:52.084575 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 01:24:52.087779 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 01:24:52.090786 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 01:24:52.098620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 01:24:52.104962 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:24:52.104971 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:24:52.105441 systemd-networkd[1466]: eth0: Link UP
Aug 13 01:24:52.105595 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 01:24:52.106734 systemd-networkd[1466]: eth0: Gained carrier
Aug 13 01:24:52.106754 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:24:52.108677 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Aug 13 01:24:52.112127 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 01:24:52.114613 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 01:24:52.121749 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 01:24:52.125592 jq[1506]: false
Aug 13 01:24:52.124366 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 01:24:52.128752 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 01:24:52.130017 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 01:24:52.144643 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 01:24:52.146306 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 01:24:52.147832 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:24:52.151997 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 01:24:52.157393 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 01:24:52.162207 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 01:24:52.163796 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:24:52.163995 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 01:24:52.168264 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Refreshing passwd entry cache
Aug 13 01:24:52.168270 oslogin_cache_refresh[1510]: Refreshing passwd entry cache
Aug 13 01:24:52.181972 oslogin_cache_refresh[1510]: Failure getting users, quitting
Aug 13 01:24:52.183022 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Failure getting users, quitting
Aug 13 01:24:52.183022 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 01:24:52.181993 oslogin_cache_refresh[1510]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 01:24:52.186031 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Refreshing group entry cache
Aug 13 01:24:52.186031 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Failure getting groups, quitting
Aug 13 01:24:52.186031 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 01:24:52.183531 oslogin_cache_refresh[1510]: Refreshing group entry cache
Aug 13 01:24:52.184237 oslogin_cache_refresh[1510]: Failure getting groups, quitting
Aug 13 01:24:52.186379 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Aug 13 01:24:52.184246 oslogin_cache_refresh[1510]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 01:24:52.187866 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Aug 13 01:24:52.196790 jq[1521]: true
Aug 13 01:24:52.198945 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:24:52.199379 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 01:24:52.200581 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:24:52.200813 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 01:24:52.206716 extend-filesystems[1507]: Found /dev/sda6
Aug 13 01:24:52.219539 update_engine[1518]: I20250813 01:24:52.217959 1518 main.cc:92] Flatcar Update Engine starting
Aug 13 01:24:52.224659 extend-filesystems[1507]: Found /dev/sda9
Aug 13 01:24:52.229803 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 01:24:52.231119 tar[1529]: linux-amd64/helm
Aug 13 01:24:52.238691 extend-filesystems[1507]: Checking size of /dev/sda9
Aug 13 01:24:52.265551 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 01:24:52.285439 jq[1548]: true
Aug 13 01:24:52.300029 dbus-daemon[1504]: [system] SELinux support is enabled
Aug 13 01:24:52.300179 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 01:24:52.305869 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:24:52.305909 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 01:24:52.307021 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:24:52.307048 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 01:24:52.320345 extend-filesystems[1507]: Resized partition /dev/sda9
Aug 13 01:24:52.329198 extend-filesystems[1570]: resize2fs 1.47.2 (1-Jan-2025)
Aug 13 01:24:52.332433 coreos-metadata[1503]: Aug 13 01:24:52.332 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:24:52.334633 update_engine[1518]: I20250813 01:24:52.334575 1518 update_check_scheduler.cc:74] Next update check in 6m47s
Aug 13 01:24:52.334760 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 01:24:52.367865 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 01:24:52.387380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:24:52.388998 bash[1589]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:24:52.390063 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 01:24:52.395532 kernel: EDAC MC: Ver: 3.0.0
Aug 13 01:24:52.414563 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks
Aug 13 01:24:52.418844 kernel: EXT4-fs (sda9): resized filesystem to 555003
Aug 13 01:24:52.420671 extend-filesystems[1570]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 01:24:52.420671 extend-filesystems[1570]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 01:24:52.420671 extend-filesystems[1570]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long.
Aug 13 01:24:52.429956 extend-filesystems[1507]: Resized filesystem in /dev/sda9
Aug 13 01:24:52.518923 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:24:52.519217 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 01:24:52.560752 systemd[1]: Starting sshkeys.service...
Aug 13 01:24:52.589104 containerd[1550]: time="2025-08-13T01:24:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 13 01:24:52.615620 containerd[1550]: time="2025-08-13T01:24:52.598931536Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 13 01:24:52.637603 systemd-networkd[1466]: eth0: DHCPv4 address 172.233.222.9/24, gateway 172.233.222.1 acquired from 23.40.197.134
Aug 13 01:24:52.637750 dbus-daemon[1504]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1466 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 01:24:52.640654 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection.
Aug 13 01:24:52.644325 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 01:24:52.651232 containerd[1550]: time="2025-08-13T01:24:52.650830402Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.96µs"
Aug 13 01:24:52.651232 containerd[1550]: time="2025-08-13T01:24:52.650861002Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 13 01:24:52.651232 containerd[1550]: time="2025-08-13T01:24:52.650879602Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 13 01:24:52.651232 containerd[1550]: time="2025-08-13T01:24:52.651028002Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 13 01:24:52.651232 containerd[1550]: time="2025-08-13T01:24:52.651043202Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 13 01:24:52.651232 containerd[1550]: time="2025-08-13T01:24:52.651068042Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 01:24:52.651232 containerd[1550]: time="2025-08-13T01:24:52.651134712Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 01:24:52.651232 containerd[1550]: time="2025-08-13T01:24:52.651145642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 01:24:52.659419 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 01:24:52.664267 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 01:24:52.674732 containerd[1550]: time="2025-08-13T01:24:52.674705534Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.674820784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.674848104Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.674857774Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.675119724Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.675362584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.675393794Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.675402394Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.675425724Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.675619904Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 13 01:24:52.677424 containerd[1550]: time="2025-08-13T01:24:52.675683024Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:24:52.688779 containerd[1550]: time="2025-08-13T01:24:52.688747881Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.688881901Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.688947971Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.688963181Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.688978291Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.688987251Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.688996971Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.689007591Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.689035821Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.689044941Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.689052931Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.689064211Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.689212351Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.689232161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 13 01:24:52.689897 containerd[1550]: time="2025-08-13T01:24:52.689245831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689279561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689290161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689299621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689308651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689317221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689326901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689359821Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689370001Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689437131Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689451251Z" level=info msg="Start snapshots syncer"
Aug 13 01:24:52.690094 containerd[1550]: time="2025-08-13T01:24:52.689476471Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 13 01:24:52.690292 containerd[1550]: time="2025-08-13T01:24:52.689855571Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 13 01:24:52.690414 containerd[1550]: time="2025-08-13T01:24:52.690401271Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 13 01:24:52.691608 containerd[1550]: time="2025-08-13T01:24:52.691552222Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 13 01:24:52.691740 containerd[1550]: time="2025-08-13T01:24:52.691725712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 13 01:24:52.692596 containerd[1550]: time="2025-08-13T01:24:52.692533162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 13 01:24:52.692596 containerd[1550]: time="2025-08-13T01:24:52.692546912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 13 01:24:52.692596 containerd[1550]: time="2025-08-13T01:24:52.692554552Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 13 01:24:52.692596 containerd[1550]: time="2025-08-13T01:24:52.692563362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 13 01:24:52.692596 containerd[1550]: time="2025-08-13T01:24:52.692571602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 13 01:24:52.692596 containerd[1550]: time="2025-08-13T01:24:52.692578952Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 13 01:24:52.694445 containerd[1550]: time="2025-08-13T01:24:52.692716833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 13 01:24:52.694445 containerd[1550]: time="2025-08-13T01:24:52.692732723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 13 01:24:52.694445 containerd[1550]: time="2025-08-13T01:24:52.692740663Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697646505Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697667325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697675145Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697722355Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697730745Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697738135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697746135Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697759425Z" level=info msg="runtime interface created" Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697763755Z" level=info msg="created NRI interface" Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697769675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697779655Z" level=info msg="Connect containerd service" Aug 13 01:24:53.913304 containerd[1550]: time="2025-08-13T01:24:52.697815985Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:24:53.912670 
systemd-timesyncd[1438]: Contacted time server 171.66.97.126:123 (2.flatcar.pool.ntp.org). Aug 13 01:24:53.912716 systemd-timesyncd[1438]: Initial clock synchronization to Wed 2025-08-13 01:24:53.912519 UTC. Aug 13 01:24:53.912760 systemd-resolved[1404]: Clock change detected. Flushing caches. Aug 13 01:24:53.919707 containerd[1550]: time="2025-08-13T01:24:53.919688792Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:24:53.936916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:24:53.939380 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:24:53.967398 systemd-logind[1515]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:24:53.967426 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:24:53.992928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:24:54.018496 systemd-logind[1515]: New seat seat0. Aug 13 01:24:54.043809 coreos-metadata[1601]: Aug 13 01:24:54.043 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:24:54.045049 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 01:24:54.047719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 01:24:54.076685 locksmithd[1572]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 01:24:54.108664 containerd[1550]: time="2025-08-13T01:24:54.108520296Z" level=info msg="Start subscribing containerd event"
Aug 13 01:24:54.108998 containerd[1550]: time="2025-08-13T01:24:54.108563906Z" level=info msg="Start recovering state"
Aug 13 01:24:54.108998 containerd[1550]: time="2025-08-13T01:24:54.108880916Z" level=info msg="Start event monitor"
Aug 13 01:24:54.108998 containerd[1550]: time="2025-08-13T01:24:54.108895266Z" level=info msg="Start cni network conf syncer for default"
Aug 13 01:24:54.108998 containerd[1550]: time="2025-08-13T01:24:54.108901566Z" level=info msg="Start streaming server"
Aug 13 01:24:54.108998 containerd[1550]: time="2025-08-13T01:24:54.108907926Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Aug 13 01:24:54.108998 containerd[1550]: time="2025-08-13T01:24:54.108914936Z" level=info msg="runtime interface starting up..."
Aug 13 01:24:54.108998 containerd[1550]: time="2025-08-13T01:24:54.108919736Z" level=info msg="starting plugins..."
Aug 13 01:24:54.108998 containerd[1550]: time="2025-08-13T01:24:54.108931156Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 13 01:24:54.109606 containerd[1550]: time="2025-08-13T01:24:54.109591117Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 01:24:54.110063 containerd[1550]: time="2025-08-13T01:24:54.110050087Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 01:24:54.110657 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 01:24:54.111971 containerd[1550]: time="2025-08-13T01:24:54.111957568Z" level=info msg="containerd successfully booted in 0.309021s"
Aug 13 01:24:54.164111 coreos-metadata[1601]: Aug 13 01:24:54.163 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Aug 13 01:24:54.219642 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 01:24:54.221219 dbus-daemon[1504]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 01:24:54.222186 dbus-daemon[1504]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1600 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 01:24:54.229124 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 01:24:54.260804 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 01:24:54.279912 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 01:24:54.285007 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 01:24:54.303826 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 01:24:54.304046 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 01:24:54.307052 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 01:24:54.318113 coreos-metadata[1601]: Aug 13 01:24:54.318 INFO Fetch successful
Aug 13 01:24:54.336264 polkitd[1630]: Started polkitd version 126
Aug 13 01:24:54.337734 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 01:24:54.342696 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 01:24:54.344919 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 01:24:54.346188 update-ssh-keys[1645]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:24:54.346013 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 01:24:54.347359 polkitd[1630]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 01:24:54.347801 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 01:24:54.347574 polkitd[1630]: Loading rules from directory /run/polkit-1/rules.d
Aug 13 01:24:54.347604 polkitd[1630]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Aug 13 01:24:54.347882 polkitd[1630]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Aug 13 01:24:54.347902 polkitd[1630]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Aug 13 01:24:54.347930 polkitd[1630]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 01:24:54.348294 polkitd[1630]: Finished loading, compiling and executing 2 rules
Aug 13 01:24:54.349501 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 01:24:54.350384 systemd[1]: Finished sshkeys.service.
Aug 13 01:24:54.356612 dbus-daemon[1504]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 01:24:54.357012 polkitd[1630]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 01:24:54.364523 systemd-hostnamed[1600]: Hostname set to <172-233-222-9> (transient)
Aug 13 01:24:54.364866 systemd-resolved[1404]: System hostname changed to '172-233-222-9'.
Aug 13 01:24:54.398942 tar[1529]: linux-amd64/LICENSE
Aug 13 01:24:54.399002 tar[1529]: linux-amd64/README.md
Aug 13 01:24:54.413289 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 01:24:54.556057 coreos-metadata[1503]: Aug 13 01:24:54.556 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Aug 13 01:24:54.660113 coreos-metadata[1503]: Aug 13 01:24:54.660 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Aug 13 01:24:54.875115 coreos-metadata[1503]: Aug 13 01:24:54.875 INFO Fetch successful
Aug 13 01:24:54.875115 coreos-metadata[1503]: Aug 13 01:24:54.875 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Aug 13 01:24:55.175725 coreos-metadata[1503]: Aug 13 01:24:55.175 INFO Fetch successful
Aug 13 01:24:55.245576 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 01:24:55.246560 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 01:24:55.298946 systemd-networkd[1466]: eth0: Gained IPv6LL
Aug 13 01:24:55.300971 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 01:24:55.302459 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 01:24:55.304695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:24:55.306932 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 01:24:55.332186 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 01:24:56.074838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:24:56.076359 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 01:24:56.077090 systemd[1]: Startup finished in 2.431s (kernel) + 8.393s (initrd) + 4.876s (userspace) = 15.702s.
Aug 13 01:24:56.077773 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:24:56.490603 kubelet[1700]: E0813 01:24:56.490463    1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:24:56.493441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:24:56.493602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:24:56.493930 systemd[1]: kubelet.service: Consumed 731ms CPU time, 265.5M memory peak.
Aug 13 01:24:57.196833 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 01:24:57.198016 systemd[1]: Started sshd@0-172.233.222.9:22-147.75.109.163:54340.service - OpenSSH per-connection server daemon (147.75.109.163:54340).
Aug 13 01:24:57.549298 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 54340 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:57.551613 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:57.568533 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 01:24:57.569971 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 01:24:57.578424 systemd-logind[1515]: New session 1 of user core.
Aug 13 01:24:57.590124 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 01:24:57.592823 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 01:24:57.606838 (systemd)[1716]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:24:57.608868 systemd-logind[1515]: New session c1 of user core.
Aug 13 01:24:57.717753 systemd[1716]: Queued start job for default target default.target.
Aug 13 01:24:57.724695 systemd[1716]: Created slice app.slice - User Application Slice.
Aug 13 01:24:57.724717 systemd[1716]: Reached target paths.target - Paths.
Aug 13 01:24:57.724748 systemd[1716]: Reached target timers.target - Timers.
Aug 13 01:24:57.725856 systemd[1716]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 01:24:57.733215 systemd[1716]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 01:24:57.733254 systemd[1716]: Reached target sockets.target - Sockets.
Aug 13 01:24:57.733283 systemd[1716]: Reached target basic.target - Basic System.
Aug 13 01:24:57.733315 systemd[1716]: Reached target default.target - Main User Target.
Aug 13 01:24:57.733339 systemd[1716]: Startup finished in 119ms.
Aug 13 01:24:57.733662 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 01:24:57.735549 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 01:24:57.993071 systemd[1]: Started sshd@1-172.233.222.9:22-147.75.109.163:54346.service - OpenSSH per-connection server daemon (147.75.109.163:54346).
Aug 13 01:24:58.327396 sshd[1727]: Accepted publickey for core from 147.75.109.163 port 54346 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:58.328905 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:58.334547 systemd-logind[1515]: New session 2 of user core.
Aug 13 01:24:58.345945 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 01:24:58.578288 sshd[1729]: Connection closed by 147.75.109.163 port 54346
Aug 13 01:24:58.579165 sshd-session[1727]: pam_unix(sshd:session): session closed for user core
Aug 13 01:24:58.583103 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit.
Aug 13 01:24:58.583686 systemd[1]: sshd@1-172.233.222.9:22-147.75.109.163:54346.service: Deactivated successfully.
Aug 13 01:24:58.585059 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 01:24:58.586494 systemd-logind[1515]: Removed session 2.
Aug 13 01:24:58.637932 systemd[1]: Started sshd@2-172.233.222.9:22-147.75.109.163:33914.service - OpenSSH per-connection server daemon (147.75.109.163:33914).
Aug 13 01:24:58.980415 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 33914 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:58.982266 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:58.991472 systemd-logind[1515]: New session 3 of user core.
Aug 13 01:24:58.997902 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 01:24:59.223713 sshd[1737]: Connection closed by 147.75.109.163 port 33914
Aug 13 01:24:59.224939 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Aug 13 01:24:59.229417 systemd[1]: sshd@2-172.233.222.9:22-147.75.109.163:33914.service: Deactivated successfully.
Aug 13 01:24:59.231164 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 01:24:59.232553 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit.
Aug 13 01:24:59.234313 systemd-logind[1515]: Removed session 3.
Aug 13 01:24:59.284914 systemd[1]: Started sshd@3-172.233.222.9:22-147.75.109.163:33916.service - OpenSSH per-connection server daemon (147.75.109.163:33916).
Aug 13 01:24:59.619296 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 33916 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:59.621531 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:59.627188 systemd-logind[1515]: New session 4 of user core.
Aug 13 01:24:59.631920 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 01:24:59.863957 sshd[1745]: Connection closed by 147.75.109.163 port 33916
Aug 13 01:24:59.864700 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Aug 13 01:24:59.868512 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit.
Aug 13 01:24:59.869102 systemd[1]: sshd@3-172.233.222.9:22-147.75.109.163:33916.service: Deactivated successfully.
Aug 13 01:24:59.870704 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 01:24:59.872221 systemd-logind[1515]: Removed session 4.
Aug 13 01:24:59.926155 systemd[1]: Started sshd@4-172.233.222.9:22-147.75.109.163:33924.service - OpenSSH per-connection server daemon (147.75.109.163:33924).
Aug 13 01:25:00.276103 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 33924 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:25:00.277612 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:25:00.281552 systemd-logind[1515]: New session 5 of user core.
Aug 13 01:25:00.287878 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 01:25:00.485259 sudo[1754]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 01:25:00.485568 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:25:00.503125 sudo[1754]: pam_unix(sudo:session): session closed for user root
Aug 13 01:25:00.555097 sshd[1753]: Connection closed by 147.75.109.163 port 33924
Aug 13 01:25:00.556193 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Aug 13 01:25:00.561380 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit.
Aug 13 01:25:00.562234 systemd[1]: sshd@4-172.233.222.9:22-147.75.109.163:33924.service: Deactivated successfully.
Aug 13 01:25:00.564771 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 01:25:00.567318 systemd-logind[1515]: Removed session 5.
Aug 13 01:25:00.617167 systemd[1]: Started sshd@5-172.233.222.9:22-147.75.109.163:33930.service - OpenSSH per-connection server daemon (147.75.109.163:33930).
Aug 13 01:25:00.972261 sshd[1760]: Accepted publickey for core from 147.75.109.163 port 33930 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:25:00.973932 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:25:00.979207 systemd-logind[1515]: New session 6 of user core.
Aug 13 01:25:00.988987 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 01:25:01.173269 sudo[1764]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 01:25:01.173579 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:25:01.179061 sudo[1764]: pam_unix(sudo:session): session closed for user root
Aug 13 01:25:01.184440 sudo[1763]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 13 01:25:01.184721 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:25:01.194342 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:25:01.230111 augenrules[1786]: No rules
Aug 13 01:25:01.231606 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:25:01.231903 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:25:01.232927 sudo[1763]: pam_unix(sudo:session): session closed for user root
Aug 13 01:25:01.284616 sshd[1762]: Connection closed by 147.75.109.163 port 33930
Aug 13 01:25:01.284975 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Aug 13 01:25:01.287766 systemd[1]: sshd@5-172.233.222.9:22-147.75.109.163:33930.service: Deactivated successfully.
Aug 13 01:25:01.289448 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 01:25:01.290193 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit.
Aug 13 01:25:01.291681 systemd-logind[1515]: Removed session 6.
Aug 13 01:25:01.348951 systemd[1]: Started sshd@6-172.233.222.9:22-147.75.109.163:33936.service - OpenSSH per-connection server daemon (147.75.109.163:33936).
Aug 13 01:25:01.681549 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 33936 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:25:01.683608 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:25:01.688827 systemd-logind[1515]: New session 7 of user core.
Aug 13 01:25:01.704133 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 01:25:01.879656 sudo[1798]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 01:25:01.879999 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:25:02.143767 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 01:25:02.157094 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 01:25:02.326132 dockerd[1815]: time="2025-08-13T01:25:02.326075302Z" level=info msg="Starting up"
Aug 13 01:25:02.327341 dockerd[1815]: time="2025-08-13T01:25:02.327324113Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 13 01:25:02.356883 systemd[1]: var-lib-docker-metacopy\x2dcheck3472266584-merged.mount: Deactivated successfully.
Aug 13 01:25:02.374122 dockerd[1815]: time="2025-08-13T01:25:02.374091786Z" level=info msg="Loading containers: start."
Aug 13 01:25:02.384802 kernel: Initializing XFRM netlink socket
Aug 13 01:25:02.566612 systemd-networkd[1466]: docker0: Link UP
Aug 13 01:25:02.569567 dockerd[1815]: time="2025-08-13T01:25:02.569545194Z" level=info msg="Loading containers: done."
Aug 13 01:25:02.583523 dockerd[1815]: time="2025-08-13T01:25:02.583484941Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 01:25:02.583641 dockerd[1815]: time="2025-08-13T01:25:02.583540501Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Aug 13 01:25:02.583641 dockerd[1815]: time="2025-08-13T01:25:02.583628171Z" level=info msg="Initializing buildkit"
Aug 13 01:25:02.598387 dockerd[1815]: time="2025-08-13T01:25:02.598369958Z" level=info msg="Completed buildkit initialization"
Aug 13 01:25:02.603514 dockerd[1815]: time="2025-08-13T01:25:02.603479731Z" level=info msg="Daemon has completed initialization"
Aug 13 01:25:02.603560 dockerd[1815]: time="2025-08-13T01:25:02.603526121Z" level=info msg="API listen on /run/docker.sock"
Aug 13 01:25:02.603753 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 01:25:03.070776 containerd[1550]: time="2025-08-13T01:25:03.070747804Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 01:25:03.817655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008941058.mount: Deactivated successfully.
Aug 13 01:25:04.652068 containerd[1550]: time="2025-08-13T01:25:04.652001554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:04.653178 containerd[1550]: time="2025-08-13T01:25:04.652975075Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 01:25:04.654843 containerd[1550]: time="2025-08-13T01:25:04.653967315Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:04.658838 containerd[1550]: time="2025-08-13T01:25:04.658159457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:04.659830 containerd[1550]: time="2025-08-13T01:25:04.659782028Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.589002434s"
Aug 13 01:25:04.659894 containerd[1550]: time="2025-08-13T01:25:04.659834798Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 01:25:04.660541 containerd[1550]: time="2025-08-13T01:25:04.660507269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 01:25:05.964294 containerd[1550]: time="2025-08-13T01:25:05.964216360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:05.965428 containerd[1550]: time="2025-08-13T01:25:05.965388571Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 01:25:05.966827 containerd[1550]: time="2025-08-13T01:25:05.965939961Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:05.970805 containerd[1550]: time="2025-08-13T01:25:05.970374733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:05.972999 containerd[1550]: time="2025-08-13T01:25:05.972957044Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.312398305s"
Aug 13 01:25:05.973042 containerd[1550]: time="2025-08-13T01:25:05.973002024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 01:25:05.974115 containerd[1550]: time="2025-08-13T01:25:05.973967195Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 01:25:06.744129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:25:06.747004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:25:06.928940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:25:06.938204 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:25:06.976811 kubelet[2087]: E0813 01:25:06.976732    2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:25:06.981477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:25:06.981645 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:25:06.982695 systemd[1]: kubelet.service: Consumed 187ms CPU time, 111.1M memory peak.
Aug 13 01:25:07.220862 containerd[1550]: time="2025-08-13T01:25:07.220554638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:07.221933 containerd[1550]: time="2025-08-13T01:25:07.221589948Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 01:25:07.222428 containerd[1550]: time="2025-08-13T01:25:07.222393589Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:07.224471 containerd[1550]: time="2025-08-13T01:25:07.224435040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:07.225757 containerd[1550]: time="2025-08-13T01:25:07.225691430Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.251451505s"
Aug 13 01:25:07.225757 containerd[1550]: time="2025-08-13T01:25:07.225736150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 01:25:07.226831 containerd[1550]: time="2025-08-13T01:25:07.226745951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 01:25:08.369032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26589755.mount: Deactivated successfully.
Aug 13 01:25:08.712288 containerd[1550]: time="2025-08-13T01:25:08.712153863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:08.713559 containerd[1550]: time="2025-08-13T01:25:08.713531724Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 01:25:08.714119 containerd[1550]: time="2025-08-13T01:25:08.714059604Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:08.715485 containerd[1550]: time="2025-08-13T01:25:08.715464725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:08.716280 containerd[1550]: time="2025-08-13T01:25:08.715996305Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag
\"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.489217744s" Aug 13 01:25:08.716280 containerd[1550]: time="2025-08-13T01:25:08.716028355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 01:25:08.717141 containerd[1550]: time="2025-08-13T01:25:08.717096275Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:25:09.415159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088164060.mount: Deactivated successfully. Aug 13 01:25:10.031683 containerd[1550]: time="2025-08-13T01:25:10.031626892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:10.032613 containerd[1550]: time="2025-08-13T01:25:10.032452833Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:25:10.033540 containerd[1550]: time="2025-08-13T01:25:10.033508943Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:10.036547 containerd[1550]: time="2025-08-13T01:25:10.036506135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:10.037756 containerd[1550]: time="2025-08-13T01:25:10.037546895Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.32042248s" Aug 13 01:25:10.037756 containerd[1550]: time="2025-08-13T01:25:10.037577765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:25:10.038211 containerd[1550]: time="2025-08-13T01:25:10.038184106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:25:10.691834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626987614.mount: Deactivated successfully. Aug 13 01:25:10.695826 containerd[1550]: time="2025-08-13T01:25:10.695745384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:25:10.696414 containerd[1550]: time="2025-08-13T01:25:10.696389254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:25:10.697697 containerd[1550]: time="2025-08-13T01:25:10.696708325Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:25:10.698164 containerd[1550]: time="2025-08-13T01:25:10.698137565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:25:10.698798 containerd[1550]: time="2025-08-13T01:25:10.698760376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 660.54731ms" Aug 13 01:25:10.698854 containerd[1550]: time="2025-08-13T01:25:10.698841416Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:25:10.699290 containerd[1550]: time="2025-08-13T01:25:10.699275356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:25:11.401583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53070081.mount: Deactivated successfully. Aug 13 01:25:12.651062 containerd[1550]: time="2025-08-13T01:25:12.650984951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:12.652092 containerd[1550]: time="2025-08-13T01:25:12.651990862Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 01:25:12.652589 containerd[1550]: time="2025-08-13T01:25:12.652559862Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:12.654887 containerd[1550]: time="2025-08-13T01:25:12.654857733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:12.655835 containerd[1550]: time="2025-08-13T01:25:12.655651663Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 1.956299547s" Aug 13 01:25:12.655835 containerd[1550]: time="2025-08-13T01:25:12.655681933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:25:14.465199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:14.465314 systemd[1]: kubelet.service: Consumed 187ms CPU time, 111.1M memory peak. Aug 13 01:25:14.467759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:14.490044 systemd[1]: Reload requested from client PID 2239 ('systemctl') (unit session-7.scope)... Aug 13 01:25:14.490060 systemd[1]: Reloading... Aug 13 01:25:14.612820 zram_generator::config[2289]: No configuration found. Aug 13 01:25:14.681254 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:25:14.765732 systemd[1]: Reloading finished in 275 ms. Aug 13 01:25:14.826168 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:25:14.826251 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:25:14.826475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:14.826509 systemd[1]: kubelet.service: Consumed 114ms CPU time, 98.3M memory peak. Aug 13 01:25:14.827675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:14.969739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 01:25:14.972754 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:25:15.006656 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:25:15.006656 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:25:15.006656 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:25:15.006978 kubelet[2337]: I0813 01:25:15.006697 2337 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:25:15.233049 kubelet[2337]: I0813 01:25:15.232755 2337 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:25:15.233049 kubelet[2337]: I0813 01:25:15.232779 2337 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:25:15.233049 kubelet[2337]: I0813 01:25:15.232959 2337 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:25:15.259390 kubelet[2337]: I0813 01:25:15.259260 2337 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:25:15.259591 kubelet[2337]: E0813 01:25:15.259576 2337 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://172.233.222.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.222.9:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:15.264371 kubelet[2337]: I0813 01:25:15.264351 2337 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:25:15.268496 kubelet[2337]: I0813 01:25:15.268472 2337 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:25:15.268566 kubelet[2337]: I0813 01:25:15.268554 2337 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:25:15.268671 kubelet[2337]: I0813 01:25:15.268649 2337 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:25:15.268780 kubelet[2337]: I0813 01:25:15.268668 2337 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-222-9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"im
agefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:25:15.268877 kubelet[2337]: I0813 01:25:15.268804 2337 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:25:15.268877 kubelet[2337]: I0813 01:25:15.268813 2337 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:25:15.268914 kubelet[2337]: I0813 01:25:15.268899 2337 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:25:15.271393 kubelet[2337]: I0813 01:25:15.271076 2337 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:25:15.271393 kubelet[2337]: I0813 01:25:15.271091 2337 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:25:15.271393 kubelet[2337]: I0813 01:25:15.271116 2337 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:25:15.271393 kubelet[2337]: I0813 01:25:15.271130 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:25:15.276251 kubelet[2337]: W0813 01:25:15.276212 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.233.222.9:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-222-9&limit=500&resourceVersion=0": dial tcp 172.233.222.9:6443: connect: connection refused Aug 13 01:25:15.276342 kubelet[2337]: E0813 01:25:15.276328 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.233.222.9:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-222-9&limit=500&resourceVersion=0\": dial tcp 172.233.222.9:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:15.278347 kubelet[2337]: I0813 01:25:15.278333 2337 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:25:15.278443 kubelet[2337]: W0813 01:25:15.278414 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.233.222.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.233.222.9:6443: connect: connection refused Aug 13 01:25:15.278471 kubelet[2337]: E0813 01:25:15.278448 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.233.222.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.222.9:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:15.278772 kubelet[2337]: I0813 01:25:15.278761 2337 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:25:15.278866 kubelet[2337]: W0813 01:25:15.278857 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 01:25:15.281127 kubelet[2337]: I0813 01:25:15.281114 2337 server.go:1274] "Started kubelet" Aug 13 01:25:15.281672 kubelet[2337]: I0813 01:25:15.281638 2337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:25:15.282500 kubelet[2337]: I0813 01:25:15.282237 2337 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:25:15.284625 kubelet[2337]: I0813 01:25:15.284604 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:25:15.284767 kubelet[2337]: I0813 01:25:15.284735 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:25:15.284864 kubelet[2337]: I0813 01:25:15.284853 2337 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:25:15.286160 kubelet[2337]: E0813 01:25:15.285285 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.222.9:6443/api/v1/namespaces/default/events\": dial tcp 172.233.222.9:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-222-9.185b2f24e3fa4b41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-222-9,UID:172-233-222-9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-222-9,},FirstTimestamp:2025-08-13 01:25:15.281099585 +0000 UTC m=+0.304998443,LastTimestamp:2025-08-13 01:25:15.281099585 +0000 UTC m=+0.304998443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-222-9,}" Aug 13 01:25:15.287471 kubelet[2337]: I0813 01:25:15.287443 2337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:25:15.289313 kubelet[2337]: E0813 
01:25:15.289298 2337 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-222-9\" not found" Aug 13 01:25:15.289351 kubelet[2337]: I0813 01:25:15.289323 2337 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:25:15.289539 kubelet[2337]: I0813 01:25:15.289520 2337 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:25:15.289569 kubelet[2337]: I0813 01:25:15.289558 2337 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:25:15.290150 kubelet[2337]: W0813 01:25:15.289876 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.233.222.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.222.9:6443: connect: connection refused Aug 13 01:25:15.290150 kubelet[2337]: E0813 01:25:15.289908 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.222.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.222.9:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:15.290150 kubelet[2337]: E0813 01:25:15.289968 2337 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:25:15.290150 kubelet[2337]: I0813 01:25:15.290066 2337 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:25:15.290150 kubelet[2337]: I0813 01:25:15.290109 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:25:15.290558 kubelet[2337]: E0813 01:25:15.290525 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-9?timeout=10s\": dial tcp 172.233.222.9:6443: connect: connection refused" interval="200ms" Aug 13 01:25:15.291122 kubelet[2337]: I0813 01:25:15.291104 2337 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:25:15.301813 kubelet[2337]: I0813 01:25:15.301726 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:25:15.302725 kubelet[2337]: I0813 01:25:15.302701 2337 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:25:15.302725 kubelet[2337]: I0813 01:25:15.302719 2337 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:25:15.302776 kubelet[2337]: I0813 01:25:15.302731 2337 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:25:15.302776 kubelet[2337]: E0813 01:25:15.302761 2337 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:25:15.308195 kubelet[2337]: W0813 01:25:15.308147 2337 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.233.222.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.233.222.9:6443: connect: connection refused Aug 13 01:25:15.308195 kubelet[2337]: E0813 01:25:15.308175 2337 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.233.222.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.222.9:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:15.319339 kubelet[2337]: I0813 01:25:15.319322 2337 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:25:15.319339 kubelet[2337]: I0813 01:25:15.319332 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:25:15.319420 kubelet[2337]: I0813 01:25:15.319355 2337 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:25:15.320822 kubelet[2337]: I0813 01:25:15.320810 2337 policy_none.go:49] "None policy: Start" Aug 13 01:25:15.321147 kubelet[2337]: I0813 01:25:15.321140 2337 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:25:15.321172 kubelet[2337]: I0813 01:25:15.321153 2337 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:25:15.327032 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Aug 13 01:25:15.335520 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:25:15.338086 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:25:15.345805 kubelet[2337]: I0813 01:25:15.345480 2337 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:25:15.345805 kubelet[2337]: I0813 01:25:15.345636 2337 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:25:15.345805 kubelet[2337]: I0813 01:25:15.345645 2337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:25:15.345805 kubelet[2337]: I0813 01:25:15.345769 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:25:15.346951 kubelet[2337]: E0813 01:25:15.346900 2337 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-222-9\" not found" Aug 13 01:25:15.409926 systemd[1]: Created slice kubepods-burstable-pod57089a9ac37a492ab3e7a2d088e556c7.slice - libcontainer container kubepods-burstable-pod57089a9ac37a492ab3e7a2d088e556c7.slice. Aug 13 01:25:15.437350 systemd[1]: Created slice kubepods-burstable-pod7d47c2165ba88a6aa3b239bb73b4cf04.slice - libcontainer container kubepods-burstable-pod7d47c2165ba88a6aa3b239bb73b4cf04.slice. 
Aug 13 01:25:15.447146 kubelet[2337]: I0813 01:25:15.447125 2337 kubelet_node_status.go:72] "Attempting to register node" node="172-233-222-9" Aug 13 01:25:15.448296 kubelet[2337]: E0813 01:25:15.447348 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.222.9:6443/api/v1/nodes\": dial tcp 172.233.222.9:6443: connect: connection refused" node="172-233-222-9" Aug 13 01:25:15.448191 systemd[1]: Created slice kubepods-burstable-podc7ba067858580e0a02b0f77127396568.slice - libcontainer container kubepods-burstable-podc7ba067858580e0a02b0f77127396568.slice. Aug 13 01:25:15.491461 kubelet[2337]: E0813 01:25:15.491394 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-9?timeout=10s\": dial tcp 172.233.222.9:6443: connect: connection refused" interval="400ms" Aug 13 01:25:15.590760 kubelet[2337]: I0813 01:25:15.590710 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7ba067858580e0a02b0f77127396568-kubeconfig\") pod \"kube-scheduler-172-233-222-9\" (UID: \"c7ba067858580e0a02b0f77127396568\") " pod="kube-system/kube-scheduler-172-233-222-9" Aug 13 01:25:15.590760 kubelet[2337]: I0813 01:25:15.590761 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:15.590847 kubelet[2337]: I0813 01:25:15.590826 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-flexvolume-dir\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:15.590847 kubelet[2337]: I0813 01:25:15.590842 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-k8s-certs\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:15.590902 kubelet[2337]: I0813 01:25:15.590858 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-kubeconfig\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:15.590902 kubelet[2337]: I0813 01:25:15.590874 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57089a9ac37a492ab3e7a2d088e556c7-ca-certs\") pod \"kube-apiserver-172-233-222-9\" (UID: \"57089a9ac37a492ab3e7a2d088e556c7\") " pod="kube-system/kube-apiserver-172-233-222-9" Aug 13 01:25:15.590902 kubelet[2337]: I0813 01:25:15.590887 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57089a9ac37a492ab3e7a2d088e556c7-k8s-certs\") pod \"kube-apiserver-172-233-222-9\" (UID: \"57089a9ac37a492ab3e7a2d088e556c7\") " pod="kube-system/kube-apiserver-172-233-222-9" Aug 13 01:25:15.590902 kubelet[2337]: I0813 01:25:15.590904 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57089a9ac37a492ab3e7a2d088e556c7-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-222-9\" (UID: \"57089a9ac37a492ab3e7a2d088e556c7\") " pod="kube-system/kube-apiserver-172-233-222-9" Aug 13 01:25:15.590982 kubelet[2337]: I0813 01:25:15.590917 2337 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-ca-certs\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:15.649547 kubelet[2337]: I0813 01:25:15.649529 2337 kubelet_node_status.go:72] "Attempting to register node" node="172-233-222-9" Aug 13 01:25:15.649892 kubelet[2337]: E0813 01:25:15.649874 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.222.9:6443/api/v1/nodes\": dial tcp 172.233.222.9:6443: connect: connection refused" node="172-233-222-9" Aug 13 01:25:15.736058 kubelet[2337]: E0813 01:25:15.736007 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:15.736442 containerd[1550]: time="2025-08-13T01:25:15.736407143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-222-9,Uid:57089a9ac37a492ab3e7a2d088e556c7,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:15.746812 kubelet[2337]: E0813 01:25:15.745812 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:15.747983 containerd[1550]: time="2025-08-13T01:25:15.747235048Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-233-222-9,Uid:7d47c2165ba88a6aa3b239bb73b4cf04,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:15.751148 kubelet[2337]: E0813 01:25:15.751128 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:15.754745 containerd[1550]: time="2025-08-13T01:25:15.754704152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-222-9,Uid:c7ba067858580e0a02b0f77127396568,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:15.768535 containerd[1550]: time="2025-08-13T01:25:15.768475119Z" level=info msg="connecting to shim b1e4c9c1af11dec3c0956b506fe5793d688584c2f8922c730d7bc03d27e63c5c" address="unix:///run/containerd/s/f66d3aac4b024570ca1226e3544c194876c0eaa83bb232991c4f634911ab0c7c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:15.804815 containerd[1550]: time="2025-08-13T01:25:15.803685216Z" level=info msg="connecting to shim 26cf5d563ef3713738c6a5b037438a329a15fdff2e15c9c100de6ba4dcb84361" address="unix:///run/containerd/s/82b4c538c70a2a9a786077cdaf23e527f96d624f77a50074ff55d1a768543f29" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:15.806840 containerd[1550]: time="2025-08-13T01:25:15.806808758Z" level=info msg="connecting to shim cde1a6b86b2923227367e578499d2f802d788d6f2b3740462bd335d95344e156" address="unix:///run/containerd/s/49378650f8fa21298cd59c5c5e564f85e47496c86a1e6e3cfd4e9289c7db603c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:15.825930 systemd[1]: Started cri-containerd-b1e4c9c1af11dec3c0956b506fe5793d688584c2f8922c730d7bc03d27e63c5c.scope - libcontainer container b1e4c9c1af11dec3c0956b506fe5793d688584c2f8922c730d7bc03d27e63c5c. 
Aug 13 01:25:15.837972 systemd[1]: Started cri-containerd-cde1a6b86b2923227367e578499d2f802d788d6f2b3740462bd335d95344e156.scope - libcontainer container cde1a6b86b2923227367e578499d2f802d788d6f2b3740462bd335d95344e156. Aug 13 01:25:15.844026 systemd[1]: Started cri-containerd-26cf5d563ef3713738c6a5b037438a329a15fdff2e15c9c100de6ba4dcb84361.scope - libcontainer container 26cf5d563ef3713738c6a5b037438a329a15fdff2e15c9c100de6ba4dcb84361. Aug 13 01:25:15.891738 kubelet[2337]: E0813 01:25:15.891701 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-9?timeout=10s\": dial tcp 172.233.222.9:6443: connect: connection refused" interval="800ms" Aug 13 01:25:15.899186 containerd[1550]: time="2025-08-13T01:25:15.899091724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-222-9,Uid:7d47c2165ba88a6aa3b239bb73b4cf04,Namespace:kube-system,Attempt:0,} returns sandbox id \"cde1a6b86b2923227367e578499d2f802d788d6f2b3740462bd335d95344e156\"" Aug 13 01:25:15.899960 kubelet[2337]: E0813 01:25:15.899927 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:15.901854 containerd[1550]: time="2025-08-13T01:25:15.901827065Z" level=info msg="CreateContainer within sandbox \"cde1a6b86b2923227367e578499d2f802d788d6f2b3740462bd335d95344e156\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:25:15.909798 containerd[1550]: time="2025-08-13T01:25:15.909728399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-222-9,Uid:57089a9ac37a492ab3e7a2d088e556c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1e4c9c1af11dec3c0956b506fe5793d688584c2f8922c730d7bc03d27e63c5c\"" Aug 13 01:25:15.911185 containerd[1550]: 
time="2025-08-13T01:25:15.911130660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-222-9,Uid:c7ba067858580e0a02b0f77127396568,Namespace:kube-system,Attempt:0,} returns sandbox id \"26cf5d563ef3713738c6a5b037438a329a15fdff2e15c9c100de6ba4dcb84361\"" Aug 13 01:25:15.911583 kubelet[2337]: E0813 01:25:15.911571 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:15.912193 containerd[1550]: time="2025-08-13T01:25:15.912100470Z" level=info msg="Container 9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:15.912237 kubelet[2337]: E0813 01:25:15.912121 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:15.913920 containerd[1550]: time="2025-08-13T01:25:15.913900101Z" level=info msg="CreateContainer within sandbox \"b1e4c9c1af11dec3c0956b506fe5793d688584c2f8922c730d7bc03d27e63c5c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:25:15.914806 containerd[1550]: time="2025-08-13T01:25:15.914712472Z" level=info msg="CreateContainer within sandbox \"26cf5d563ef3713738c6a5b037438a329a15fdff2e15c9c100de6ba4dcb84361\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:25:15.916794 containerd[1550]: time="2025-08-13T01:25:15.916764883Z" level=info msg="CreateContainer within sandbox \"cde1a6b86b2923227367e578499d2f802d788d6f2b3740462bd335d95344e156\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea\"" Aug 13 01:25:15.917742 containerd[1550]: time="2025-08-13T01:25:15.917467283Z" level=info msg="StartContainer for 
\"9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea\"" Aug 13 01:25:15.918301 containerd[1550]: time="2025-08-13T01:25:15.918283914Z" level=info msg="connecting to shim 9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea" address="unix:///run/containerd/s/49378650f8fa21298cd59c5c5e564f85e47496c86a1e6e3cfd4e9289c7db603c" protocol=ttrpc version=3 Aug 13 01:25:15.922586 containerd[1550]: time="2025-08-13T01:25:15.922564466Z" level=info msg="Container f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:15.923948 containerd[1550]: time="2025-08-13T01:25:15.923928746Z" level=info msg="Container 188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:15.928128 containerd[1550]: time="2025-08-13T01:25:15.927962018Z" level=info msg="CreateContainer within sandbox \"b1e4c9c1af11dec3c0956b506fe5793d688584c2f8922c730d7bc03d27e63c5c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8\"" Aug 13 01:25:15.928402 containerd[1550]: time="2025-08-13T01:25:15.928372799Z" level=info msg="StartContainer for \"f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8\"" Aug 13 01:25:15.929578 containerd[1550]: time="2025-08-13T01:25:15.929453579Z" level=info msg="connecting to shim f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8" address="unix:///run/containerd/s/f66d3aac4b024570ca1226e3544c194876c0eaa83bb232991c4f634911ab0c7c" protocol=ttrpc version=3 Aug 13 01:25:15.930522 containerd[1550]: time="2025-08-13T01:25:15.930497760Z" level=info msg="CreateContainer within sandbox \"26cf5d563ef3713738c6a5b037438a329a15fdff2e15c9c100de6ba4dcb84361\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef\"" Aug 13 01:25:15.931021 
containerd[1550]: time="2025-08-13T01:25:15.930993430Z" level=info msg="StartContainer for \"188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef\"" Aug 13 01:25:15.933281 containerd[1550]: time="2025-08-13T01:25:15.933257521Z" level=info msg="connecting to shim 188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef" address="unix:///run/containerd/s/82b4c538c70a2a9a786077cdaf23e527f96d624f77a50074ff55d1a768543f29" protocol=ttrpc version=3 Aug 13 01:25:15.935957 systemd[1]: Started cri-containerd-9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea.scope - libcontainer container 9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea. Aug 13 01:25:15.960984 systemd[1]: Started cri-containerd-188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef.scope - libcontainer container 188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef. Aug 13 01:25:15.964159 systemd[1]: Started cri-containerd-f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8.scope - libcontainer container f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8. 
Aug 13 01:25:16.022568 containerd[1550]: time="2025-08-13T01:25:16.022393776Z" level=info msg="StartContainer for \"f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8\" returns successfully" Aug 13 01:25:16.026962 containerd[1550]: time="2025-08-13T01:25:16.026927738Z" level=info msg="StartContainer for \"9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea\" returns successfully" Aug 13 01:25:16.036394 containerd[1550]: time="2025-08-13T01:25:16.036352573Z" level=info msg="StartContainer for \"188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef\" returns successfully" Aug 13 01:25:16.052577 kubelet[2337]: I0813 01:25:16.052545 2337 kubelet_node_status.go:72] "Attempting to register node" node="172-233-222-9" Aug 13 01:25:16.054045 kubelet[2337]: E0813 01:25:16.053979 2337 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.222.9:6443/api/v1/nodes\": dial tcp 172.233.222.9:6443: connect: connection refused" node="172-233-222-9" Aug 13 01:25:16.321192 kubelet[2337]: E0813 01:25:16.321091 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:16.324643 kubelet[2337]: E0813 01:25:16.324619 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:16.327451 kubelet[2337]: E0813 01:25:16.327429 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:16.857021 kubelet[2337]: I0813 01:25:16.856982 2337 kubelet_node_status.go:72] "Attempting to register node" node="172-233-222-9" Aug 13 01:25:17.012689 kubelet[2337]: E0813 01:25:17.012634 2337 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-233-222-9\" not found" node="172-233-222-9" Aug 13 01:25:17.165271 kubelet[2337]: I0813 01:25:17.163914 2337 kubelet_node_status.go:75] "Successfully registered node" node="172-233-222-9" Aug 13 01:25:17.277350 kubelet[2337]: I0813 01:25:17.277167 2337 apiserver.go:52] "Watching apiserver" Aug 13 01:25:17.289614 kubelet[2337]: I0813 01:25:17.289588 2337 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:25:17.330349 kubelet[2337]: E0813 01:25:17.330326 2337 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-233-222-9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-222-9" Aug 13 01:25:17.330457 kubelet[2337]: E0813 01:25:17.330438 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:18.915368 systemd[1]: Reload requested from client PID 2607 ('systemctl') (unit session-7.scope)... Aug 13 01:25:18.915383 systemd[1]: Reloading... Aug 13 01:25:18.987856 zram_generator::config[2651]: No configuration found. Aug 13 01:25:19.062651 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:25:19.157309 systemd[1]: Reloading finished in 241 ms. Aug 13 01:25:19.189328 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:19.205654 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:25:19.205898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:19.205940 systemd[1]: kubelet.service: Consumed 595ms CPU time, 130.4M memory peak. 
Aug 13 01:25:19.207426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:19.362690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:19.369234 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:25:19.396055 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:25:19.396352 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:25:19.396387 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:25:19.396553 kubelet[2702]: I0813 01:25:19.396470 2702 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:25:19.402212 kubelet[2702]: I0813 01:25:19.402197 2702 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:25:19.402283 kubelet[2702]: I0813 01:25:19.402274 2702 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:25:19.402446 kubelet[2702]: I0813 01:25:19.402435 2702 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:25:19.403545 kubelet[2702]: I0813 01:25:19.403527 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 01:25:19.405413 kubelet[2702]: I0813 01:25:19.405401 2702 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:25:19.408250 kubelet[2702]: I0813 01:25:19.408239 2702 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:25:19.411178 kubelet[2702]: I0813 01:25:19.411157 2702 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:25:19.411327 kubelet[2702]: I0813 01:25:19.411317 2702 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:25:19.411500 kubelet[2702]: I0813 01:25:19.411481 2702 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:25:19.411634 kubelet[2702]: I0813 01:25:19.411532 2702 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-222-9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"i
magefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:25:19.411731 kubelet[2702]: I0813 01:25:19.411722 2702 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:25:19.411774 kubelet[2702]: I0813 01:25:19.411767 2702 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:25:19.411844 kubelet[2702]: I0813 01:25:19.411836 2702 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:25:19.411995 kubelet[2702]: I0813 01:25:19.411947 2702 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:25:19.411995 kubelet[2702]: I0813 01:25:19.411960 2702 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:25:19.413589 kubelet[2702]: I0813 01:25:19.411981 2702 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:25:19.413629 kubelet[2702]: I0813 01:25:19.413594 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:25:19.415050 kubelet[2702]: I0813 01:25:19.415037 2702 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:25:19.415273 kubelet[2702]: I0813 01:25:19.415261 2702 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:25:19.416838 kubelet[2702]: I0813 01:25:19.415517 2702 server.go:1274] "Started kubelet" Aug 13 01:25:19.416838 kubelet[2702]: I0813 01:25:19.415646 2702 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 
01:25:19.416838 kubelet[2702]: I0813 01:25:19.415658 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:25:19.416838 kubelet[2702]: I0813 01:25:19.416213 2702 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:25:19.417693 kubelet[2702]: I0813 01:25:19.417681 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:25:19.421135 kubelet[2702]: I0813 01:25:19.418173 2702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:25:19.421241 kubelet[2702]: I0813 01:25:19.421231 2702 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:25:19.422748 kubelet[2702]: I0813 01:25:19.421408 2702 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:25:19.422878 kubelet[2702]: I0813 01:25:19.422869 2702 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:25:19.422900 kubelet[2702]: I0813 01:25:19.418520 2702 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:25:19.422917 kubelet[2702]: E0813 01:25:19.421499 2702 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-222-9\" not found" Aug 13 01:25:19.425452 kubelet[2702]: I0813 01:25:19.425439 2702 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:25:19.434842 kubelet[2702]: I0813 01:25:19.434824 2702 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:25:19.439628 kubelet[2702]: I0813 01:25:19.439596 2702 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:25:19.446918 kubelet[2702]: I0813 01:25:19.446313 2702 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Aug 13 01:25:19.447826 kubelet[2702]: I0813 01:25:19.447811 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:25:19.447852 kubelet[2702]: I0813 01:25:19.447827 2702 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:25:19.447852 kubelet[2702]: I0813 01:25:19.447839 2702 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:25:19.447906 kubelet[2702]: E0813 01:25:19.447888 2702 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:25:19.476152 kubelet[2702]: I0813 01:25:19.476136 2702 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:25:19.476234 kubelet[2702]: I0813 01:25:19.476225 2702 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:25:19.476285 kubelet[2702]: I0813 01:25:19.476278 2702 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:25:19.476414 kubelet[2702]: I0813 01:25:19.476403 2702 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:25:19.476473 kubelet[2702]: I0813 01:25:19.476451 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:25:19.476507 kubelet[2702]: I0813 01:25:19.476500 2702 policy_none.go:49] "None policy: Start" Aug 13 01:25:19.477019 kubelet[2702]: I0813 01:25:19.477008 2702 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:25:19.477129 kubelet[2702]: I0813 01:25:19.477122 2702 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:25:19.477300 kubelet[2702]: I0813 01:25:19.477291 2702 state_mem.go:75] "Updated machine memory state" Aug 13 01:25:19.481015 kubelet[2702]: I0813 01:25:19.481004 2702 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:25:19.481194 kubelet[2702]: I0813 01:25:19.481184 2702 
eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:25:19.481254 kubelet[2702]: I0813 01:25:19.481235 2702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:25:19.481407 kubelet[2702]: I0813 01:25:19.481397 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:25:19.584174 kubelet[2702]: I0813 01:25:19.584158 2702 kubelet_node_status.go:72] "Attempting to register node" node="172-233-222-9" Aug 13 01:25:19.588980 kubelet[2702]: I0813 01:25:19.588966 2702 kubelet_node_status.go:111] "Node was previously registered" node="172-233-222-9" Aug 13 01:25:19.589087 kubelet[2702]: I0813 01:25:19.589078 2702 kubelet_node_status.go:75] "Successfully registered node" node="172-233-222-9" Aug 13 01:25:19.724752 kubelet[2702]: I0813 01:25:19.724655 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7ba067858580e0a02b0f77127396568-kubeconfig\") pod \"kube-scheduler-172-233-222-9\" (UID: \"c7ba067858580e0a02b0f77127396568\") " pod="kube-system/kube-scheduler-172-233-222-9" Aug 13 01:25:19.724752 kubelet[2702]: I0813 01:25:19.724694 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57089a9ac37a492ab3e7a2d088e556c7-ca-certs\") pod \"kube-apiserver-172-233-222-9\" (UID: \"57089a9ac37a492ab3e7a2d088e556c7\") " pod="kube-system/kube-apiserver-172-233-222-9" Aug 13 01:25:19.724752 kubelet[2702]: I0813 01:25:19.724713 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57089a9ac37a492ab3e7a2d088e556c7-k8s-certs\") pod \"kube-apiserver-172-233-222-9\" (UID: \"57089a9ac37a492ab3e7a2d088e556c7\") " pod="kube-system/kube-apiserver-172-233-222-9" Aug 13 01:25:19.724752 kubelet[2702]: 
I0813 01:25:19.724731 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57089a9ac37a492ab3e7a2d088e556c7-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-222-9\" (UID: \"57089a9ac37a492ab3e7a2d088e556c7\") " pod="kube-system/kube-apiserver-172-233-222-9" Aug 13 01:25:19.724977 kubelet[2702]: I0813 01:25:19.724757 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-flexvolume-dir\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:19.724977 kubelet[2702]: I0813 01:25:19.724774 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-k8s-certs\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:19.724977 kubelet[2702]: I0813 01:25:19.724834 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:19.724977 kubelet[2702]: I0813 01:25:19.724860 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-ca-certs\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " 
pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:19.724977 kubelet[2702]: I0813 01:25:19.724878 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d47c2165ba88a6aa3b239bb73b4cf04-kubeconfig\") pod \"kube-controller-manager-172-233-222-9\" (UID: \"7d47c2165ba88a6aa3b239bb73b4cf04\") " pod="kube-system/kube-controller-manager-172-233-222-9" Aug 13 01:25:19.853288 kubelet[2702]: E0813 01:25:19.853236 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:19.853288 kubelet[2702]: E0813 01:25:19.853473 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:19.853288 kubelet[2702]: E0813 01:25:19.853582 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:19.918511 sudo[2736]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:25:19.919027 sudo[2736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 01:25:20.351188 sudo[2736]: pam_unix(sudo:session): session closed for user root Aug 13 01:25:20.414672 kubelet[2702]: I0813 01:25:20.414430 2702 apiserver.go:52] "Watching apiserver" Aug 13 01:25:20.423721 kubelet[2702]: I0813 01:25:20.423676 2702 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:25:20.458247 kubelet[2702]: E0813 01:25:20.456515 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:20.458247 kubelet[2702]: E0813 01:25:20.457130 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:20.465210 kubelet[2702]: E0813 01:25:20.465196 2702 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-233-222-9\" already exists" pod="kube-system/kube-apiserver-172-233-222-9" Aug 13 01:25:20.465376 kubelet[2702]: E0813 01:25:20.465365 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:20.488645 kubelet[2702]: I0813 01:25:20.488381 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-222-9" podStartSLOduration=1.488372837 podStartE2EDuration="1.488372837s" podCreationTimestamp="2025-08-13 01:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:25:20.488200417 +0000 UTC m=+1.115435688" watchObservedRunningTime="2025-08-13 01:25:20.488372837 +0000 UTC m=+1.115608108" Aug 13 01:25:20.489279 kubelet[2702]: I0813 01:25:20.489178 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-222-9" podStartSLOduration=1.489173257 podStartE2EDuration="1.489173257s" podCreationTimestamp="2025-08-13 01:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:25:20.481248943 +0000 UTC m=+1.108484214" watchObservedRunningTime="2025-08-13 01:25:20.489173257 +0000 UTC m=+1.116408538" Aug 13 01:25:21.459382 kubelet[2702]: E0813 01:25:21.458598 2702 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:25:21.535782 sudo[1798]: pam_unix(sudo:session): session closed for user root Aug 13 01:25:21.586849 sshd[1797]: Connection closed by 147.75.109.163 port 33936 Aug 13 01:25:21.587546 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:21.591153 systemd[1]: sshd@6-172.233.222.9:22-147.75.109.163:33936.service: Deactivated successfully. Aug 13 01:25:21.594505 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:25:21.594894 systemd[1]: session-7.scope: Consumed 3.383s CPU time, 267.9M memory peak. Aug 13 01:25:21.599600 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:25:21.601061 systemd-logind[1515]: Removed session 7. Aug 13 01:25:24.394006 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:25:24.756291 kubelet[2702]: I0813 01:25:24.756140 2702 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:25:24.756612 containerd[1550]: time="2025-08-13T01:25:24.756588110Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 01:25:24.756887 kubelet[2702]: I0813 01:25:24.756873 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:25:25.704625 kubelet[2702]: I0813 01:25:25.704560 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-222-9" podStartSLOduration=6.704544583 podStartE2EDuration="6.704544583s" podCreationTimestamp="2025-08-13 01:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:25:20.49451916 +0000 UTC m=+1.121754431" watchObservedRunningTime="2025-08-13 01:25:25.704544583 +0000 UTC m=+6.331779864" Aug 13 01:25:25.714136 systemd[1]: Created slice kubepods-besteffort-pod1f851959_37dd_4e34_8e80_47231ab275e8.slice - libcontainer container kubepods-besteffort-pod1f851959_37dd_4e34_8e80_47231ab275e8.slice. Aug 13 01:25:25.758137 systemd[1]: Created slice kubepods-burstable-pod025e24fb_7026_4e6f_b2f2_17d07d390180.slice - libcontainer container kubepods-burstable-pod025e24fb_7026_4e6f_b2f2_17d07d390180.slice. 
Aug 13 01:25:25.763258 kubelet[2702]: I0813 01:25:25.763215 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f851959-37dd-4e34-8e80-47231ab275e8-kube-proxy\") pod \"kube-proxy-jj7lz\" (UID: \"1f851959-37dd-4e34-8e80-47231ab275e8\") " pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:25:25.763258 kubelet[2702]: I0813 01:25:25.763251 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f851959-37dd-4e34-8e80-47231ab275e8-lib-modules\") pod \"kube-proxy-jj7lz\" (UID: \"1f851959-37dd-4e34-8e80-47231ab275e8\") " pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:25:25.763562 kubelet[2702]: I0813 01:25:25.763266 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-run\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763562 kubelet[2702]: I0813 01:25:25.763282 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-bpf-maps\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763562 kubelet[2702]: I0813 01:25:25.763297 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-config-path\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763562 kubelet[2702]: I0813 01:25:25.763310 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-lib-modules\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763562 kubelet[2702]: I0813 01:25:25.763323 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-cgroup\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763562 kubelet[2702]: I0813 01:25:25.763337 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f851959-37dd-4e34-8e80-47231ab275e8-xtables-lock\") pod \"kube-proxy-jj7lz\" (UID: \"1f851959-37dd-4e34-8e80-47231ab275e8\") " pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:25:25.763690 kubelet[2702]: I0813 01:25:25.763352 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-host-proc-sys-kernel\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763690 kubelet[2702]: I0813 01:25:25.763365 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-hostproc\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763690 kubelet[2702]: I0813 01:25:25.763379 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-etc-cni-netd\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763690 kubelet[2702]: I0813 01:25:25.763391 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmfd\" (UniqueName: \"kubernetes.io/projected/025e24fb-7026-4e6f-b2f2-17d07d390180-kube-api-access-rwmfd\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763690 kubelet[2702]: I0813 01:25:25.763406 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/025e24fb-7026-4e6f-b2f2-17d07d390180-hubble-tls\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.763690 kubelet[2702]: I0813 01:25:25.763421 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/025e24fb-7026-4e6f-b2f2-17d07d390180-clustermesh-secrets\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.764208 kubelet[2702]: I0813 01:25:25.763435 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4f8s\" (UniqueName: \"kubernetes.io/projected/1f851959-37dd-4e34-8e80-47231ab275e8-kube-api-access-l4f8s\") pod \"kube-proxy-jj7lz\" (UID: \"1f851959-37dd-4e34-8e80-47231ab275e8\") " pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:25:25.764208 kubelet[2702]: I0813 01:25:25.763447 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cni-path\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.764208 kubelet[2702]: I0813 01:25:25.763462 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-xtables-lock\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.764208 kubelet[2702]: I0813 01:25:25.763476 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-host-proc-sys-net\") pod \"cilium-7n2kj\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " pod="kube-system/cilium-7n2kj"
Aug 13 01:25:25.903565 systemd[1]: Created slice kubepods-besteffort-pod26a9811f_ff63_41d6_b219_d1ffa4ebedec.slice - libcontainer container kubepods-besteffort-pod26a9811f_ff63_41d6_b219_d1ffa4ebedec.slice.
Aug 13 01:25:25.964966 kubelet[2702]: I0813 01:25:25.964781 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws7pb\" (UniqueName: \"kubernetes.io/projected/26a9811f-ff63-41d6-b219-d1ffa4ebedec-kube-api-access-ws7pb\") pod \"cilium-operator-5d85765b45-2bjcc\" (UID: \"26a9811f-ff63-41d6-b219-d1ffa4ebedec\") " pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:25:25.964966 kubelet[2702]: I0813 01:25:25.964855 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26a9811f-ff63-41d6-b219-d1ffa4ebedec-cilium-config-path\") pod \"cilium-operator-5d85765b45-2bjcc\" (UID: \"26a9811f-ff63-41d6-b219-d1ffa4ebedec\") " pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:25:26.023927 kubelet[2702]: E0813 01:25:26.023900 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:26.024549 containerd[1550]: time="2025-08-13T01:25:26.024525473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj7lz,Uid:1f851959-37dd-4e34-8e80-47231ab275e8,Namespace:kube-system,Attempt:0,}"
Aug 13 01:25:26.041630 containerd[1550]: time="2025-08-13T01:25:26.041344962Z" level=info msg="connecting to shim 4634859afd0297fdd9a1630912eabed6e072606ce00f7ccc8753d69a66504c55" address="unix:///run/containerd/s/b139ee016514e5bb81d7b283fc81a91d8c34bc09cf7fc6b5d92967544d46189a" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:25:26.063394 kubelet[2702]: E0813 01:25:26.063373 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:26.063961 containerd[1550]: time="2025-08-13T01:25:26.063934693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7n2kj,Uid:025e24fb-7026-4e6f-b2f2-17d07d390180,Namespace:kube-system,Attempt:0,}"
Aug 13 01:25:26.064910 systemd[1]: Started cri-containerd-4634859afd0297fdd9a1630912eabed6e072606ce00f7ccc8753d69a66504c55.scope - libcontainer container 4634859afd0297fdd9a1630912eabed6e072606ce00f7ccc8753d69a66504c55.
Aug 13 01:25:26.091208 containerd[1550]: time="2025-08-13T01:25:26.091084446Z" level=info msg="connecting to shim dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94" address="unix:///run/containerd/s/2ea178bf9a92b82a9216d9b066cecf8cffd5704c76af47439fc173dfbf18c433" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:25:26.094090 containerd[1550]: time="2025-08-13T01:25:26.094028848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj7lz,Uid:1f851959-37dd-4e34-8e80-47231ab275e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4634859afd0297fdd9a1630912eabed6e072606ce00f7ccc8753d69a66504c55\""
Aug 13 01:25:26.094936 kubelet[2702]: E0813 01:25:26.094900 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:26.097370 containerd[1550]: time="2025-08-13T01:25:26.097340330Z" level=info msg="CreateContainer within sandbox \"4634859afd0297fdd9a1630912eabed6e072606ce00f7ccc8753d69a66504c55\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 01:25:26.106068 containerd[1550]: time="2025-08-13T01:25:26.106043554Z" level=info msg="Container 3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:25:26.119896 systemd[1]: Started cri-containerd-dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94.scope - libcontainer container dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94.
Aug 13 01:25:26.128844 containerd[1550]: time="2025-08-13T01:25:26.128760275Z" level=info msg="CreateContainer within sandbox \"4634859afd0297fdd9a1630912eabed6e072606ce00f7ccc8753d69a66504c55\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84\""
Aug 13 01:25:26.129896 containerd[1550]: time="2025-08-13T01:25:26.129628946Z" level=info msg="StartContainer for \"3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84\""
Aug 13 01:25:26.131175 containerd[1550]: time="2025-08-13T01:25:26.131159526Z" level=info msg="connecting to shim 3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84" address="unix:///run/containerd/s/b139ee016514e5bb81d7b283fc81a91d8c34bc09cf7fc6b5d92967544d46189a" protocol=ttrpc version=3
Aug 13 01:25:26.151018 systemd[1]: Started cri-containerd-3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84.scope - libcontainer container 3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84.
Aug 13 01:25:26.152871 containerd[1550]: time="2025-08-13T01:25:26.152855297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7n2kj,Uid:025e24fb-7026-4e6f-b2f2-17d07d390180,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\""
Aug 13 01:25:26.153876 kubelet[2702]: E0813 01:25:26.153826 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:26.155517 containerd[1550]: time="2025-08-13T01:25:26.155502069Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 01:25:26.196043 containerd[1550]: time="2025-08-13T01:25:26.195990819Z" level=info msg="StartContainer for \"3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84\" returns successfully"
Aug 13 01:25:26.207589 kubelet[2702]: E0813 01:25:26.207560 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:26.208517 containerd[1550]: time="2025-08-13T01:25:26.208483165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2bjcc,Uid:26a9811f-ff63-41d6-b219-d1ffa4ebedec,Namespace:kube-system,Attempt:0,}"
Aug 13 01:25:26.228067 containerd[1550]: time="2025-08-13T01:25:26.227723725Z" level=info msg="connecting to shim 4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59" address="unix:///run/containerd/s/33cbb466cc87b9856102c0398fd4c2daf1b33c0111176da7fbe43505ee033588" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:25:26.253891 systemd[1]: Started cri-containerd-4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59.scope - libcontainer container 4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59.
Aug 13 01:25:26.298399 containerd[1550]: time="2025-08-13T01:25:26.298317870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2bjcc,Uid:26a9811f-ff63-41d6-b219-d1ffa4ebedec,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\""
Aug 13 01:25:26.299274 kubelet[2702]: E0813 01:25:26.299219 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:26.468251 kubelet[2702]: E0813 01:25:26.468217 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:28.352319 kubelet[2702]: E0813 01:25:28.352267 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:28.368469 kubelet[2702]: I0813 01:25:28.368056 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jj7lz" podStartSLOduration=3.368045686 podStartE2EDuration="3.368045686s" podCreationTimestamp="2025-08-13 01:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:25:26.47810434 +0000 UTC m=+7.105339631" watchObservedRunningTime="2025-08-13 01:25:28.368045686 +0000 UTC m=+8.995280957"
Aug 13 01:25:28.472254 kubelet[2702]: E0813 01:25:28.471764 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:28.709597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262087005.mount: Deactivated successfully.
Aug 13 01:25:29.060823 kubelet[2702]: E0813 01:25:29.059580 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:29.473234 kubelet[2702]: E0813 01:25:29.472937 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:29.511494 kubelet[2702]: I0813 01:25:29.511461 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:25:29.511575 kubelet[2702]: I0813 01:25:29.511502 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:25:29.515453 kubelet[2702]: I0813 01:25:29.515427 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:25:29.536345 kubelet[2702]: I0813 01:25:29.536187 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:25:29.536399 kubelet[2702]: I0813 01:25:29.536357 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9","kube-system/kube-proxy-jj7lz"]
Aug 13 01:25:29.536437 kubelet[2702]: E0813 01:25:29.536406 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:25:29.536437 kubelet[2702]: E0813 01:25:29.536420 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:25:29.536437 kubelet[2702]: E0813 01:25:29.536432 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:25:29.536508 kubelet[2702]: E0813 01:25:29.536443 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:25:29.536570 kubelet[2702]: E0813 01:25:29.536548 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:25:29.536570 kubelet[2702]: E0813 01:25:29.536566 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:25:29.536610 kubelet[2702]: I0813 01:25:29.536577 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:25:29.666406 kubelet[2702]: E0813 01:25:29.666327 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:30.096866 containerd[1550]: time="2025-08-13T01:25:30.096359266Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 13 01:25:30.098324 containerd[1550]: time="2025-08-13T01:25:30.098303629Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:30.099297 containerd[1550]: time="2025-08-13T01:25:30.099279246Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 3.943684946s"
Aug 13 01:25:30.099363 containerd[1550]: time="2025-08-13T01:25:30.099349417Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 01:25:30.100266 containerd[1550]: time="2025-08-13T01:25:30.099626041Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:30.100747 containerd[1550]: time="2025-08-13T01:25:30.100732901Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 01:25:30.103052 containerd[1550]: time="2025-08-13T01:25:30.103021340Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 01:25:30.109055 containerd[1550]: time="2025-08-13T01:25:30.109034423Z" level=info msg="Container 05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:25:30.122077 containerd[1550]: time="2025-08-13T01:25:30.122037056Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\""
Aug 13 01:25:30.122620 containerd[1550]: time="2025-08-13T01:25:30.122580635Z" level=info msg="StartContainer for \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\""
Aug 13 01:25:30.123302 containerd[1550]: time="2025-08-13T01:25:30.123268807Z" level=info msg="connecting to shim 05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7" address="unix:///run/containerd/s/2ea178bf9a92b82a9216d9b066cecf8cffd5704c76af47439fc173dfbf18c433" protocol=ttrpc version=3
Aug 13 01:25:30.149901 systemd[1]: Started cri-containerd-05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7.scope - libcontainer container 05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7.
Aug 13 01:25:30.173941 containerd[1550]: time="2025-08-13T01:25:30.173894455Z" level=info msg="StartContainer for \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\" returns successfully"
Aug 13 01:25:30.186621 systemd[1]: cri-containerd-05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7.scope: Deactivated successfully.
Aug 13 01:25:30.188010 containerd[1550]: time="2025-08-13T01:25:30.187983046Z" level=info msg="received exit event container_id:\"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\" id:\"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\" pid:3114 exited_at:{seconds:1755048330 nanos:187662011}"
Aug 13 01:25:30.188222 containerd[1550]: time="2025-08-13T01:25:30.188023407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\" id:\"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\" pid:3114 exited_at:{seconds:1755048330 nanos:187662011}"
Aug 13 01:25:30.204963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7-rootfs.mount: Deactivated successfully.
Aug 13 01:25:30.476409 kubelet[2702]: E0813 01:25:30.475894 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:30.477230 kubelet[2702]: E0813 01:25:30.476103 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:30.479449 containerd[1550]: time="2025-08-13T01:25:30.479424315Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 01:25:30.486276 containerd[1550]: time="2025-08-13T01:25:30.486254632Z" level=info msg="Container 3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:25:30.493149 containerd[1550]: time="2025-08-13T01:25:30.493129180Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\""
Aug 13 01:25:30.494177 containerd[1550]: time="2025-08-13T01:25:30.493809621Z" level=info msg="StartContainer for \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\""
Aug 13 01:25:30.494681 containerd[1550]: time="2025-08-13T01:25:30.494661127Z" level=info msg="connecting to shim 3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85" address="unix:///run/containerd/s/2ea178bf9a92b82a9216d9b066cecf8cffd5704c76af47439fc173dfbf18c433" protocol=ttrpc version=3
Aug 13 01:25:30.514887 systemd[1]: Started cri-containerd-3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85.scope - libcontainer container 3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85.
Aug 13 01:25:30.540932 containerd[1550]: time="2025-08-13T01:25:30.540905989Z" level=info msg="StartContainer for \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\" returns successfully"
Aug 13 01:25:30.552053 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:25:30.552229 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:25:30.552626 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:25:30.554666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:25:30.558255 systemd[1]: cri-containerd-3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85.scope: Deactivated successfully.
Aug 13 01:25:30.559021 containerd[1550]: time="2025-08-13T01:25:30.558994940Z" level=info msg="received exit event container_id:\"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\" id:\"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\" pid:3160 exited_at:{seconds:1755048330 nanos:558845527}"
Aug 13 01:25:30.559125 containerd[1550]: time="2025-08-13T01:25:30.559092991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\" id:\"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\" pid:3160 exited_at:{seconds:1755048330 nanos:558845527}"
Aug 13 01:25:30.577196 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:25:31.084855 containerd[1550]: time="2025-08-13T01:25:31.084817759Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:31.085405 containerd[1550]: time="2025-08-13T01:25:31.085384418Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 13 01:25:31.085875 containerd[1550]: time="2025-08-13T01:25:31.085846375Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:31.087000 containerd[1550]: time="2025-08-13T01:25:31.086915433Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 986.081861ms"
Aug 13 01:25:31.087000 containerd[1550]: time="2025-08-13T01:25:31.086943234Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 01:25:31.089387 containerd[1550]: time="2025-08-13T01:25:31.088849314Z" level=info msg="CreateContainer within sandbox \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 01:25:31.093078 containerd[1550]: time="2025-08-13T01:25:31.093049232Z" level=info msg="Container e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:25:31.112254 containerd[1550]: time="2025-08-13T01:25:31.112238290Z" level=info msg="CreateContainer within sandbox \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\""
Aug 13 01:25:31.112929 containerd[1550]: time="2025-08-13T01:25:31.112907972Z" level=info msg="StartContainer for \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\""
Aug 13 01:25:31.113491 containerd[1550]: time="2025-08-13T01:25:31.113474841Z" level=info msg="connecting to shim e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4" address="unix:///run/containerd/s/33cbb466cc87b9856102c0398fd4c2daf1b33c0111176da7fbe43505ee033588" protocol=ttrpc version=3
Aug 13 01:25:31.134893 systemd[1]: Started cri-containerd-e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4.scope - libcontainer container e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4.
Aug 13 01:25:31.157683 containerd[1550]: time="2025-08-13T01:25:31.157640622Z" level=info msg="StartContainer for \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" returns successfully"
Aug 13 01:25:31.478604 kubelet[2702]: E0813 01:25:31.478488 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:31.485258 kubelet[2702]: E0813 01:25:31.485225 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:31.487801 containerd[1550]: time="2025-08-13T01:25:31.487729099Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 01:25:31.500631 containerd[1550]: time="2025-08-13T01:25:31.500557985Z" level=info msg="Container 2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:25:31.511627 containerd[1550]: time="2025-08-13T01:25:31.511523353Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\""
Aug 13 01:25:31.513767 containerd[1550]: time="2025-08-13T01:25:31.513727048Z" level=info msg="StartContainer for \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\""
Aug 13 01:25:31.514731 containerd[1550]: time="2025-08-13T01:25:31.514706013Z" level=info msg="connecting to shim 2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7" address="unix:///run/containerd/s/2ea178bf9a92b82a9216d9b066cecf8cffd5704c76af47439fc173dfbf18c433" protocol=ttrpc version=3
Aug 13 01:25:31.540904 systemd[1]: Started cri-containerd-2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7.scope - libcontainer container 2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7.
Aug 13 01:25:31.542886 kubelet[2702]: I0813 01:25:31.542253 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2bjcc" podStartSLOduration=1.755113436 podStartE2EDuration="6.542239107s" podCreationTimestamp="2025-08-13 01:25:25 +0000 UTC" firstStartedPulling="2025-08-13 01:25:26.300380591 +0000 UTC m=+6.927615862" lastFinishedPulling="2025-08-13 01:25:31.087506252 +0000 UTC m=+11.714741533" observedRunningTime="2025-08-13 01:25:31.512905564 +0000 UTC m=+12.140140845" watchObservedRunningTime="2025-08-13 01:25:31.542239107 +0000 UTC m=+12.169474388"
Aug 13 01:25:31.580300 containerd[1550]: time="2025-08-13T01:25:31.580261189Z" level=info msg="StartContainer for \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\" returns successfully"
Aug 13 01:25:31.582468 systemd[1]: cri-containerd-2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7.scope: Deactivated successfully.
Aug 13 01:25:31.584025 containerd[1550]: time="2025-08-13T01:25:31.584004580Z" level=info msg="received exit event container_id:\"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\" id:\"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\" pid:3255 exited_at:{seconds:1755048331 nanos:583676625}"
Aug 13 01:25:31.584123 containerd[1550]: time="2025-08-13T01:25:31.584103481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\" id:\"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\" pid:3255 exited_at:{seconds:1755048331 nanos:583676625}"
Aug 13 01:25:32.108127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7-rootfs.mount: Deactivated successfully.
Aug 13 01:25:32.488523 kubelet[2702]: E0813 01:25:32.488501 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:32.488853 kubelet[2702]: E0813 01:25:32.488719 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:32.490633 containerd[1550]: time="2025-08-13T01:25:32.490605486Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 01:25:32.499607 containerd[1550]: time="2025-08-13T01:25:32.499563061Z" level=info msg="Container c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:25:32.503777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571225421.mount: Deactivated successfully.
Aug 13 01:25:32.509200 containerd[1550]: time="2025-08-13T01:25:32.509073855Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\""
Aug 13 01:25:32.509761 containerd[1550]: time="2025-08-13T01:25:32.509746926Z" level=info msg="StartContainer for \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\""
Aug 13 01:25:32.510703 containerd[1550]: time="2025-08-13T01:25:32.510688120Z" level=info msg="connecting to shim c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc" address="unix:///run/containerd/s/2ea178bf9a92b82a9216d9b066cecf8cffd5704c76af47439fc173dfbf18c433" protocol=ttrpc version=3
Aug 13 01:25:32.530896 systemd[1]: Started cri-containerd-c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc.scope - libcontainer container c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc.
Aug 13 01:25:32.553321 containerd[1550]: time="2025-08-13T01:25:32.553065521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\" id:\"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\" pid:3292 exited_at:{seconds:1755048332 nanos:552941399}"
Aug 13 01:25:32.553232 systemd[1]: cri-containerd-c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc.scope: Deactivated successfully.
Aug 13 01:25:32.554369 containerd[1550]: time="2025-08-13T01:25:32.554340781Z" level=info msg="received exit event container_id:\"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\" id:\"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\" pid:3292 exited_at:{seconds:1755048332 nanos:552941399}"
Aug 13 01:25:32.560801 containerd[1550]: time="2025-08-13T01:25:32.560768288Z" level=info msg="StartContainer for \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\" returns successfully"
Aug 13 01:25:32.571080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc-rootfs.mount: Deactivated successfully.
Aug 13 01:25:33.492588 kubelet[2702]: E0813 01:25:33.492449 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:33.493771 containerd[1550]: time="2025-08-13T01:25:33.493698957Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:25:33.502986 containerd[1550]: time="2025-08-13T01:25:33.502963219Z" level=info msg="Container 0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:25:33.510292 containerd[1550]: time="2025-08-13T01:25:33.510165871Z" level=info msg="CreateContainer within sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\""
Aug 13 01:25:33.510713 containerd[1550]: time="2025-08-13T01:25:33.510668289Z" level=info msg="StartContainer for \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\""
Aug 13 01:25:33.512369 containerd[1550]: time="2025-08-13T01:25:33.512319261Z" level=info msg="connecting to shim 0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320" address="unix:///run/containerd/s/2ea178bf9a92b82a9216d9b066cecf8cffd5704c76af47439fc173dfbf18c433" protocol=ttrpc version=3
Aug 13 01:25:33.533988 systemd[1]: Started cri-containerd-0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320.scope - libcontainer container 0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320.
Aug 13 01:25:33.564769 containerd[1550]: time="2025-08-13T01:25:33.564732787Z" level=info msg="StartContainer for \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" returns successfully"
Aug 13 01:25:33.618245 containerd[1550]: time="2025-08-13T01:25:33.618209797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" id:\"abb6ad4bd052f7ac4935845d00189807c30596721e217c2e108557ffbe20bee4\" pid:3360 exited_at:{seconds:1755048333 nanos:617909643}"
Aug 13 01:25:33.648057 kubelet[2702]: I0813 01:25:33.648029 2702 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 13 01:25:34.496344 kubelet[2702]: E0813 01:25:34.496309 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:34.520861 kubelet[2702]: I0813 01:25:34.520809 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7n2kj" podStartSLOduration=5.575112962 podStartE2EDuration="9.520778924s" podCreationTimestamp="2025-08-13 01:25:25 +0000 UTC" firstStartedPulling="2025-08-13 01:25:26.155008178 +0000 UTC m=+6.782243449" lastFinishedPulling="2025-08-13 01:25:30.10067413 +0000 UTC m=+10.727909411" observedRunningTime="2025-08-13 01:25:34.520081275 +0000 UTC m=+15.147316556" watchObservedRunningTime="2025-08-13 01:25:34.520778924 +0000 UTC m=+15.148014205"
Aug 13 01:25:35.498615 kubelet[2702]: E0813 01:25:35.498556 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:35.625691 systemd-networkd[1466]: cilium_host: Link UP
Aug 13 01:25:35.627746 systemd-networkd[1466]: cilium_net: Link UP
Aug 13 01:25:35.628038 systemd-networkd[1466]: cilium_net: Gained carrier
Aug 13 01:25:35.629083 systemd-networkd[1466]: cilium_host: Gained carrier
Aug 13 01:25:35.727700 systemd-networkd[1466]: cilium_vxlan: Link UP
Aug 13 01:25:35.727712 systemd-networkd[1466]: cilium_vxlan: Gained carrier
Aug 13 01:25:35.908826 kernel: NET: Registered PF_ALG protocol family
Aug 13 01:25:35.923073 systemd-networkd[1466]: cilium_host: Gained IPv6LL
Aug 13 01:25:35.930854 systemd-networkd[1466]: cilium_net: Gained IPv6LL
Aug 13 01:25:36.410742 systemd-networkd[1466]: lxc_health: Link UP
Aug 13 01:25:36.416036 systemd-networkd[1466]: lxc_health: Gained carrier
Aug 13 01:25:36.500688 kubelet[2702]: E0813 01:25:36.500652 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:36.771000 systemd-networkd[1466]: cilium_vxlan: Gained IPv6LL
Aug 13 01:25:37.732990 systemd-networkd[1466]: lxc_health: Gained IPv6LL
Aug 13 01:25:38.067355 kubelet[2702]: E0813 01:25:38.066175 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:38.379781 update_engine[1518]: I20250813 01:25:38.378861 1518 update_attempter.cc:509] Updating boot flags...
Aug 13 01:25:38.510712 kubelet[2702]: E0813 01:25:38.510651 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:39.510165 kubelet[2702]: E0813 01:25:39.509971 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:25:39.553526 kubelet[2702]: I0813 01:25:39.553494 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:25:39.553526 kubelet[2702]: I0813 01:25:39.553532 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:25:39.558529 kubelet[2702]: I0813 01:25:39.558508 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:25:39.576000 kubelet[2702]: I0813 01:25:39.575970 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:25:39.576193 kubelet[2702]: I0813 01:25:39.576173 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:25:39.576225 kubelet[2702]: E0813 01:25:39.576212 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:25:39.576347 kubelet[2702]: E0813 01:25:39.576224 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:25:39.576347 kubelet[2702]: E0813 01:25:39.576347 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:25:39.576406 kubelet[2702]: E0813 01:25:39.576355 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:25:39.576406 kubelet[2702]: E0813 01:25:39.576363 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:25:39.576406 kubelet[2702]: E0813 01:25:39.576370 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:25:39.576406 kubelet[2702]: I0813 01:25:39.576379 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:25:49.588699 kubelet[2702]: I0813 01:25:49.588672 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:25:49.588699 kubelet[2702]: I0813 01:25:49.588704 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:25:49.590445 kubelet[2702]: I0813 01:25:49.590427 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:25:49.598996 kubelet[2702]: I0813 01:25:49.598982 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:25:49.599071 kubelet[2702]: I0813 01:25:49.599050 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:25:49.599118 kubelet[2702]: E0813 01:25:49.599076 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:25:49.599118 kubelet[2702]: E0813 01:25:49.599086 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:25:49.599118 kubelet[2702]: E0813 01:25:49.599095 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:25:49.599118 kubelet[2702]: E0813 01:25:49.599102 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:25:49.599118 kubelet[2702]: E0813 01:25:49.599108 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:25:49.599118 kubelet[2702]: E0813 01:25:49.599117 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:25:49.599226 kubelet[2702]: I0813 01:25:49.599124 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:25:59.613049 kubelet[2702]: I0813 01:25:59.612994 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:25:59.613049 kubelet[2702]: I0813 01:25:59.613038 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:25:59.614693 kubelet[2702]: I0813 01:25:59.614676 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:25:59.622631 kubelet[2702]: I0813 01:25:59.622617 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:25:59.622705 kubelet[2702]: I0813 01:25:59.622687 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:25:59.622732 kubelet[2702]: E0813 01:25:59.622720 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:25:59.622732 kubelet[2702]: E0813 01:25:59.622730 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:25:59.622781 kubelet[2702]: E0813 01:25:59.622737 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:25:59.622781 kubelet[2702]: E0813 01:25:59.622744 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:25:59.622781 kubelet[2702]: E0813 01:25:59.622750 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:25:59.622781 kubelet[2702]: E0813 01:25:59.622757 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:25:59.622781 kubelet[2702]: I0813 01:25:59.622764 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:26:09.637414 kubelet[2702]: I0813 01:26:09.637385 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:09.637414 kubelet[2702]: I0813 01:26:09.637420 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:26:09.639049 kubelet[2702]: I0813 01:26:09.639029 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:09.647228 kubelet[2702]: I0813 01:26:09.647197 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:09.647277 kubelet[2702]: I0813 01:26:09.647262 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:26:09.647326 kubelet[2702]: E0813 01:26:09.647289 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:26:09.647326 kubelet[2702]: E0813 01:26:09.647299 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:26:09.647326 kubelet[2702]: E0813 01:26:09.647318 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:26:09.647326 kubelet[2702]: E0813 01:26:09.647327 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:26:09.647404 kubelet[2702]: E0813 01:26:09.647334 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:26:09.647404 kubelet[2702]: E0813 01:26:09.647340 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:26:09.647404 kubelet[2702]: I0813 01:26:09.647347 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:26:19.667906 kubelet[2702]: I0813 01:26:19.667850 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:19.667906 kubelet[2702]: I0813 01:26:19.667904 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:26:19.671128 kubelet[2702]: I0813 01:26:19.671085 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:19.682596 kubelet[2702]: I0813 01:26:19.682566 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:19.682723 kubelet[2702]: I0813 01:26:19.682643 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:26:19.682723 kubelet[2702]: E0813 01:26:19.682672 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:26:19.682723 kubelet[2702]: E0813 01:26:19.682683 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:26:19.682723 kubelet[2702]: E0813 01:26:19.682690 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:26:19.682723 kubelet[2702]: E0813 01:26:19.682700 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:26:19.682723 kubelet[2702]: E0813 01:26:19.682708 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:26:19.682723 kubelet[2702]: E0813 01:26:19.682715 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:26:19.682723 kubelet[2702]: I0813 01:26:19.682722 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:26:29.697701 kubelet[2702]: I0813 01:26:29.697582 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:29.697701 kubelet[2702]: I0813 01:26:29.697684 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:26:29.701013 kubelet[2702]: I0813 01:26:29.700949 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:29.718296 kubelet[2702]: I0813 01:26:29.718269 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:29.718409 kubelet[2702]: I0813 01:26:29.718383 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:26:29.718449 kubelet[2702]: E0813 01:26:29.718427 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:26:29.718449 kubelet[2702]: E0813 01:26:29.718439 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:26:29.718449 kubelet[2702]: E0813 01:26:29.718449 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:26:29.718504 kubelet[2702]: E0813 01:26:29.718458 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:26:29.718504 kubelet[2702]: E0813 01:26:29.718469 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:26:29.718504 kubelet[2702]: E0813 01:26:29.718476 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:26:29.718504 kubelet[2702]: I0813 01:26:29.718486 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:26:31.449752 kubelet[2702]: E0813 01:26:31.449247 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:26:33.449743 kubelet[2702]: E0813 01:26:33.449130 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:26:34.448356 kubelet[2702]: E0813 01:26:34.448278 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:26:39.737814 kubelet[2702]: I0813 01:26:39.737109 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:39.737814 kubelet[2702]: I0813 01:26:39.737159 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:26:39.743717 kubelet[2702]: I0813 01:26:39.743690 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:39.758090 kubelet[2702]: I0813 01:26:39.758063 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:39.758299 kubelet[2702]: I0813 01:26:39.758259 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:26:39.758299 kubelet[2702]: E0813 01:26:39.758301 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:26:39.758299 kubelet[2702]: E0813 01:26:39.758314 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:26:39.758299 kubelet[2702]: E0813 01:26:39.758328 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:26:39.758538 kubelet[2702]: E0813 01:26:39.758337 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:26:39.758538 kubelet[2702]: E0813 01:26:39.758347 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:26:39.758538 kubelet[2702]: E0813 01:26:39.758355 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:26:39.758538 kubelet[2702]: I0813 01:26:39.758365 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:26:48.449901 kubelet[2702]: E0813 01:26:48.449603 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:26:49.774025 kubelet[2702]: I0813 01:26:49.773986 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:49.774025 kubelet[2702]: I0813 01:26:49.774035 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:26:49.777624 kubelet[2702]: I0813 01:26:49.777602 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:49.790208 kubelet[2702]: I0813 01:26:49.790164 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:49.790380 kubelet[2702]: I0813 01:26:49.790309 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:26:49.790380 kubelet[2702]: E0813 01:26:49.790350 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:26:49.790380 kubelet[2702]: E0813 01:26:49.790363 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:26:49.790380 kubelet[2702]: E0813 01:26:49.790375 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:26:49.790380 kubelet[2702]: E0813 01:26:49.790384 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:26:49.790667 kubelet[2702]: E0813 01:26:49.790395 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:26:49.790667 kubelet[2702]: E0813 01:26:49.790404 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:26:49.790667 kubelet[2702]: I0813 01:26:49.790414 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:26:55.449849 kubelet[2702]: E0813 01:26:55.449601 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:26:55.452118 kubelet[2702]: E0813 01:26:55.450642 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:26:59.805158 kubelet[2702]: I0813 01:26:59.805107 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:59.805158 kubelet[2702]: I0813 01:26:59.805151 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:26:59.806805 kubelet[2702]: I0813 01:26:59.806569 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:59.818132 kubelet[2702]: I0813 01:26:59.818104 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:59.818530 kubelet[2702]: I0813 01:26:59.818506 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-proxy-jj7lz","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:26:59.818614 kubelet[2702]: E0813 01:26:59.818556 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:26:59.818639 kubelet[2702]: E0813 01:26:59.818619 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:26:59.818639 kubelet[2702]: E0813 01:26:59.818627 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:26:59.818639 kubelet[2702]: E0813 01:26:59.818636 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:26:59.818806 kubelet[2702]: E0813 01:26:59.818646 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:26:59.818806 kubelet[2702]: E0813 01:26:59.818769 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:26:59.818806 kubelet[2702]: I0813 01:26:59.818777 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:27:09.835358 kubelet[2702]: I0813 01:27:09.835303 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:27:09.835358 kubelet[2702]: I0813 01:27:09.835369 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:27:09.838368 kubelet[2702]: I0813 01:27:09.838320 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:27:09.849794 kubelet[2702]: I0813 01:27:09.849750 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:27:09.849933 kubelet[2702]: I0813 01:27:09.849910 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:27:09.849983 kubelet[2702]: E0813 01:27:09.849969 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:27:09.850016 kubelet[2702]: E0813 01:27:09.849984 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:27:09.850016 kubelet[2702]: E0813 01:27:09.849993 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:27:09.850016 kubelet[2702]: E0813 01:27:09.850003 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:27:09.850016 kubelet[2702]: E0813 01:27:09.850014 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:27:09.850086 kubelet[2702]: E0813 01:27:09.850039 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:27:09.850086 kubelet[2702]: I0813 01:27:09.850048 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:27:19.861370 kubelet[2702]: I0813 01:27:19.861344 2702 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:27:19.861370 kubelet[2702]: I0813 01:27:19.861375 2702 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:27:19.863049 kubelet[2702]: I0813 01:27:19.863037 2702 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:27:19.864378 kubelet[2702]: I0813 01:27:19.864359 2702 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler=""
Aug 13 01:27:19.864576 containerd[1550]: time="2025-08-13T01:27:19.864542126Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 01:27:19.865976 containerd[1550]: time="2025-08-13T01:27:19.865955766Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 01:27:19.866863 containerd[1550]: time="2025-08-13T01:27:19.866768789Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\""
Aug 13 01:27:19.867226 containerd[1550]: time="2025-08-13T01:27:19.867170785Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully"
Aug 13 01:27:19.867262 containerd[1550]: time="2025-08-13T01:27:19.867237405Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 01:27:19.867507 kubelet[2702]: I0813 01:27:19.867398 2702 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler=""
Aug 13 01:27:19.867764 containerd[1550]: time="2025-08-13T01:27:19.867735552Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 01:27:19.868503 containerd[1550]: time="2025-08-13T01:27:19.868480995Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 01:27:19.868910 containerd[1550]: time="2025-08-13T01:27:19.868893432Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\""
Aug 13 01:27:19.869239 containerd[1550]: time="2025-08-13T01:27:19.869222149Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully"
Aug 13 01:27:19.869290 containerd[1550]: time="2025-08-13T01:27:19.869270189Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 01:27:19.875466 kubelet[2702]: I0813 01:27:19.875453 2702 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:27:19.875539 kubelet[2702]: I0813 01:27:19.875527 2702 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-2bjcc","kube-system/cilium-7n2kj","kube-system/kube-controller-manager-172-233-222-9","kube-system/kube-proxy-jj7lz","kube-system/kube-apiserver-172-233-222-9","kube-system/kube-scheduler-172-233-222-9"]
Aug 13 01:27:19.875566 kubelet[2702]: E0813 01:27:19.875553 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-2bjcc"
Aug 13 01:27:19.875566 kubelet[2702]: E0813 01:27:19.875562 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-7n2kj"
Aug 13 01:27:19.875615 kubelet[2702]: E0813 01:27:19.875570 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-9"
Aug 13 01:27:19.875615 kubelet[2702]: E0813 01:27:19.875578 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jj7lz"
Aug 13 01:27:19.875615 kubelet[2702]: E0813 01:27:19.875585 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-9"
Aug 13 01:27:19.875615 kubelet[2702]: E0813 01:27:19.875593 2702 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-9"
Aug 13 01:27:19.875615 kubelet[2702]: I0813 01:27:19.875600 2702 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:27:24.127576 systemd[1]: Started sshd@7-172.233.222.9:22-147.75.109.163:47912.service - OpenSSH per-connection server daemon (147.75.109.163:47912).
Aug 13 01:27:24.452488 sshd[3818]: Accepted publickey for core from 147.75.109.163 port 47912 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:24.453485 sshd-session[3818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:24.457469 systemd-logind[1515]: New session 8 of user core.
Aug 13 01:27:24.464886 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 01:27:24.748240 sshd[3820]: Connection closed by 147.75.109.163 port 47912
Aug 13 01:27:24.748985 sshd-session[3818]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:24.754015 systemd[1]: sshd@7-172.233.222.9:22-147.75.109.163:47912.service: Deactivated successfully.
Aug 13 01:27:24.755730 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:27:24.756414 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:27:24.757937 systemd-logind[1515]: Removed session 8. Aug 13 01:27:29.816241 systemd[1]: Started sshd@8-172.233.222.9:22-147.75.109.163:34660.service - OpenSSH per-connection server daemon (147.75.109.163:34660). Aug 13 01:27:30.150660 sshd[3835]: Accepted publickey for core from 147.75.109.163 port 34660 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:30.152368 sshd-session[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:30.158011 systemd-logind[1515]: New session 9 of user core. Aug 13 01:27:30.161941 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 01:27:30.454531 sshd[3837]: Connection closed by 147.75.109.163 port 34660 Aug 13 01:27:30.455274 sshd-session[3835]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:30.459616 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:27:30.460068 systemd[1]: sshd@8-172.233.222.9:22-147.75.109.163:34660.service: Deactivated successfully. Aug 13 01:27:30.462887 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:27:30.467313 systemd-logind[1515]: Removed session 9. Aug 13 01:27:35.519950 systemd[1]: Started sshd@9-172.233.222.9:22-147.75.109.163:34676.service - OpenSSH per-connection server daemon (147.75.109.163:34676). Aug 13 01:27:35.861243 sshd[3850]: Accepted publickey for core from 147.75.109.163 port 34676 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:35.863023 sshd-session[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:35.868802 systemd-logind[1515]: New session 10 of user core. Aug 13 01:27:35.871891 systemd[1]: Started session-10.scope - Session 10 of User core. 
Aug 13 01:27:36.164644 sshd[3852]: Connection closed by 147.75.109.163 port 34676 Aug 13 01:27:36.165370 sshd-session[3850]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:36.168839 systemd[1]: sshd@9-172.233.222.9:22-147.75.109.163:34676.service: Deactivated successfully. Aug 13 01:27:36.169114 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:27:36.170597 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:27:36.171837 systemd-logind[1515]: Removed session 10. Aug 13 01:27:36.221168 systemd[1]: Started sshd@10-172.233.222.9:22-147.75.109.163:34678.service - OpenSSH per-connection server daemon (147.75.109.163:34678). Aug 13 01:27:36.546430 sshd[3865]: Accepted publickey for core from 147.75.109.163 port 34678 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:36.547568 sshd-session[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:36.551831 systemd-logind[1515]: New session 11 of user core. Aug 13 01:27:36.557877 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 01:27:36.851612 sshd[3867]: Connection closed by 147.75.109.163 port 34678 Aug 13 01:27:36.852339 sshd-session[3865]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:36.857629 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:27:36.858246 systemd[1]: sshd@10-172.233.222.9:22-147.75.109.163:34678.service: Deactivated successfully. Aug 13 01:27:36.860210 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:27:36.861505 systemd-logind[1515]: Removed session 11. Aug 13 01:27:36.913397 systemd[1]: Started sshd@11-172.233.222.9:22-147.75.109.163:34694.service - OpenSSH per-connection server daemon (147.75.109.163:34694). 
Aug 13 01:27:37.249053 sshd[3876]: Accepted publickey for core from 147.75.109.163 port 34694 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:37.250763 sshd-session[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:37.257031 systemd-logind[1515]: New session 12 of user core. Aug 13 01:27:37.266946 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 01:27:37.541734 sshd[3878]: Connection closed by 147.75.109.163 port 34694 Aug 13 01:27:37.542774 sshd-session[3876]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:37.547547 systemd[1]: sshd@11-172.233.222.9:22-147.75.109.163:34694.service: Deactivated successfully. Aug 13 01:27:37.549757 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:27:37.551105 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:27:37.553191 systemd-logind[1515]: Removed session 12. Aug 13 01:27:39.448930 kubelet[2702]: E0813 01:27:39.448640 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:27:42.608996 systemd[1]: Started sshd@12-172.233.222.9:22-147.75.109.163:55142.service - OpenSSH per-connection server daemon (147.75.109.163:55142). Aug 13 01:27:42.951165 sshd[3890]: Accepted publickey for core from 147.75.109.163 port 55142 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:42.953347 sshd-session[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:42.959083 systemd-logind[1515]: New session 13 of user core. Aug 13 01:27:42.961924 systemd[1]: Started session-13.scope - Session 13 of User core. 
Aug 13 01:27:43.261492 sshd[3892]: Connection closed by 147.75.109.163 port 55142 Aug 13 01:27:43.262231 sshd-session[3890]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:43.266276 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:27:43.267145 systemd[1]: sshd@12-172.233.222.9:22-147.75.109.163:55142.service: Deactivated successfully. Aug 13 01:27:43.269021 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:27:43.270408 systemd-logind[1515]: Removed session 13. Aug 13 01:27:48.324082 systemd[1]: Started sshd@13-172.233.222.9:22-147.75.109.163:47870.service - OpenSSH per-connection server daemon (147.75.109.163:47870). Aug 13 01:27:48.449808 kubelet[2702]: E0813 01:27:48.448909 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:27:48.655177 sshd[3916]: Accepted publickey for core from 147.75.109.163 port 47870 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:48.657107 sshd-session[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:48.663224 systemd-logind[1515]: New session 14 of user core. Aug 13 01:27:48.669925 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:27:48.949524 sshd[3918]: Connection closed by 147.75.109.163 port 47870 Aug 13 01:27:48.950511 sshd-session[3916]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:48.954939 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:27:48.955128 systemd[1]: sshd@13-172.233.222.9:22-147.75.109.163:47870.service: Deactivated successfully. Aug 13 01:27:48.956984 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:27:48.958458 systemd-logind[1515]: Removed session 14. 
Aug 13 01:27:49.011360 systemd[1]: Started sshd@14-172.233.222.9:22-147.75.109.163:47884.service - OpenSSH per-connection server daemon (147.75.109.163:47884). Aug 13 01:27:49.349690 sshd[3930]: Accepted publickey for core from 147.75.109.163 port 47884 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:49.351532 sshd-session[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:49.357830 systemd-logind[1515]: New session 15 of user core. Aug 13 01:27:49.361972 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:27:49.672438 sshd[3932]: Connection closed by 147.75.109.163 port 47884 Aug 13 01:27:49.673156 sshd-session[3930]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:49.677592 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:27:49.678637 systemd[1]: sshd@14-172.233.222.9:22-147.75.109.163:47884.service: Deactivated successfully. Aug 13 01:27:49.680501 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:27:49.681881 systemd-logind[1515]: Removed session 15. Aug 13 01:27:49.734509 systemd[1]: Started sshd@15-172.233.222.9:22-147.75.109.163:47896.service - OpenSSH per-connection server daemon (147.75.109.163:47896). Aug 13 01:27:50.080178 sshd[3942]: Accepted publickey for core from 147.75.109.163 port 47896 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:50.081994 sshd-session[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:50.086897 systemd-logind[1515]: New session 16 of user core. Aug 13 01:27:50.094903 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 01:27:51.193051 sshd[3944]: Connection closed by 147.75.109.163 port 47896 Aug 13 01:27:51.193739 sshd-session[3942]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:51.197670 systemd-logind[1515]: Session 16 logged out. 
Waiting for processes to exit. Aug 13 01:27:51.198149 systemd[1]: sshd@15-172.233.222.9:22-147.75.109.163:47896.service: Deactivated successfully. Aug 13 01:27:51.200047 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:27:51.201901 systemd-logind[1515]: Removed session 16. Aug 13 01:27:51.253653 systemd[1]: Started sshd@16-172.233.222.9:22-147.75.109.163:47914.service - OpenSSH per-connection server daemon (147.75.109.163:47914). Aug 13 01:27:51.589084 sshd[3961]: Accepted publickey for core from 147.75.109.163 port 47914 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:51.590755 sshd-session[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:51.596376 systemd-logind[1515]: New session 17 of user core. Aug 13 01:27:51.600017 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 01:27:51.970058 sshd[3963]: Connection closed by 147.75.109.163 port 47914 Aug 13 01:27:51.970870 sshd-session[3961]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:51.974740 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:27:51.975053 systemd[1]: sshd@16-172.233.222.9:22-147.75.109.163:47914.service: Deactivated successfully. Aug 13 01:27:51.976913 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:27:51.978565 systemd-logind[1515]: Removed session 17. Aug 13 01:27:52.033904 systemd[1]: Started sshd@17-172.233.222.9:22-147.75.109.163:47918.service - OpenSSH per-connection server daemon (147.75.109.163:47918). Aug 13 01:27:52.375705 sshd[3973]: Accepted publickey for core from 147.75.109.163 port 47918 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:52.376306 sshd-session[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:52.382184 systemd-logind[1515]: New session 18 of user core. 
Aug 13 01:27:52.386932 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 01:27:52.670036 sshd[3975]: Connection closed by 147.75.109.163 port 47918 Aug 13 01:27:52.670547 sshd-session[3973]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:52.676073 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:27:52.677034 systemd[1]: sshd@17-172.233.222.9:22-147.75.109.163:47918.service: Deactivated successfully. Aug 13 01:27:52.679585 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:27:52.682021 systemd-logind[1515]: Removed session 18. Aug 13 01:27:53.449667 kubelet[2702]: E0813 01:27:53.449320 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:27:57.448838 kubelet[2702]: E0813 01:27:57.448446 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:27:57.738440 systemd[1]: Started sshd@18-172.233.222.9:22-147.75.109.163:47926.service - OpenSSH per-connection server daemon (147.75.109.163:47926). Aug 13 01:27:58.070506 sshd[3992]: Accepted publickey for core from 147.75.109.163 port 47926 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:58.072562 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:58.083235 systemd-logind[1515]: New session 19 of user core. Aug 13 01:27:58.085944 systemd[1]: Started session-19.scope - Session 19 of User core. 
Aug 13 01:27:58.378969 sshd[3994]: Connection closed by 147.75.109.163 port 47926 Aug 13 01:27:58.379559 sshd-session[3992]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:58.386281 systemd[1]: sshd@18-172.233.222.9:22-147.75.109.163:47926.service: Deactivated successfully. Aug 13 01:27:58.389990 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:27:58.391666 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:27:58.393485 systemd-logind[1515]: Removed session 19. Aug 13 01:28:03.447229 systemd[1]: Started sshd@19-172.233.222.9:22-147.75.109.163:56744.service - OpenSSH per-connection server daemon (147.75.109.163:56744). Aug 13 01:28:03.787845 sshd[4006]: Accepted publickey for core from 147.75.109.163 port 56744 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:03.788449 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:03.797052 systemd-logind[1515]: New session 20 of user core. Aug 13 01:28:03.804949 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 01:28:04.090607 sshd[4008]: Connection closed by 147.75.109.163 port 56744 Aug 13 01:28:04.091502 sshd-session[4006]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:04.096627 systemd[1]: sshd@19-172.233.222.9:22-147.75.109.163:56744.service: Deactivated successfully. Aug 13 01:28:04.099430 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:28:04.100636 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:28:04.103352 systemd-logind[1515]: Removed session 20. Aug 13 01:28:09.153016 systemd[1]: Started sshd@20-172.233.222.9:22-147.75.109.163:37066.service - OpenSSH per-connection server daemon (147.75.109.163:37066). 
Aug 13 01:28:09.482261 sshd[4028]: Accepted publickey for core from 147.75.109.163 port 37066 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:09.483655 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:09.487904 systemd-logind[1515]: New session 21 of user core. Aug 13 01:28:09.495910 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 01:28:09.766442 sshd[4030]: Connection closed by 147.75.109.163 port 37066 Aug 13 01:28:09.767029 sshd-session[4028]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:09.771228 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:28:09.771549 systemd[1]: sshd@20-172.233.222.9:22-147.75.109.163:37066.service: Deactivated successfully. Aug 13 01:28:09.773434 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:28:09.775138 systemd-logind[1515]: Removed session 21. Aug 13 01:28:14.449479 kubelet[2702]: E0813 01:28:14.449436 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:28:14.830424 systemd[1]: Started sshd@21-172.233.222.9:22-147.75.109.163:37078.service - OpenSSH per-connection server daemon (147.75.109.163:37078). Aug 13 01:28:15.163065 sshd[4042]: Accepted publickey for core from 147.75.109.163 port 37078 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:15.164403 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:15.170521 systemd-logind[1515]: New session 22 of user core. Aug 13 01:28:15.177938 systemd[1]: Started session-22.scope - Session 22 of User core. 
Aug 13 01:28:15.459505 sshd[4044]: Connection closed by 147.75.109.163 port 37078 Aug 13 01:28:15.460568 sshd-session[4042]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:15.467924 systemd[1]: sshd@21-172.233.222.9:22-147.75.109.163:37078.service: Deactivated successfully. Aug 13 01:28:15.470651 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:28:15.472152 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:28:15.474108 systemd-logind[1515]: Removed session 22. Aug 13 01:28:20.530217 systemd[1]: Started sshd@22-172.233.222.9:22-147.75.109.163:42300.service - OpenSSH per-connection server daemon (147.75.109.163:42300). Aug 13 01:28:20.870441 sshd[4059]: Accepted publickey for core from 147.75.109.163 port 42300 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:20.872250 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:20.877083 systemd-logind[1515]: New session 23 of user core. Aug 13 01:28:20.885907 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 01:28:21.176679 sshd[4061]: Connection closed by 147.75.109.163 port 42300 Aug 13 01:28:21.177598 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:21.184293 systemd[1]: sshd@22-172.233.222.9:22-147.75.109.163:42300.service: Deactivated successfully. Aug 13 01:28:21.187762 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:28:21.189151 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:28:21.191167 systemd-logind[1515]: Removed session 23. 
Aug 13 01:28:24.449218 kubelet[2702]: E0813 01:28:24.449158 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:28:26.242064 systemd[1]: Started sshd@23-172.233.222.9:22-147.75.109.163:42304.service - OpenSSH per-connection server daemon (147.75.109.163:42304). Aug 13 01:28:26.575295 sshd[4073]: Accepted publickey for core from 147.75.109.163 port 42304 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:26.576832 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:26.581866 systemd-logind[1515]: New session 24 of user core. Aug 13 01:28:26.586944 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 01:28:26.879697 sshd[4077]: Connection closed by 147.75.109.163 port 42304 Aug 13 01:28:26.880660 sshd-session[4073]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:26.884843 systemd[1]: sshd@23-172.233.222.9:22-147.75.109.163:42304.service: Deactivated successfully. Aug 13 01:28:26.887057 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:28:26.887956 systemd-logind[1515]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:28:26.889441 systemd-logind[1515]: Removed session 24. Aug 13 01:28:31.946508 systemd[1]: Started sshd@24-172.233.222.9:22-147.75.109.163:56420.service - OpenSSH per-connection server daemon (147.75.109.163:56420). Aug 13 01:28:32.293549 sshd[4089]: Accepted publickey for core from 147.75.109.163 port 56420 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:32.294896 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:32.299839 systemd-logind[1515]: New session 25 of user core. Aug 13 01:28:32.308903 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 13 01:28:32.589893 sshd[4091]: Connection closed by 147.75.109.163 port 56420 Aug 13 01:28:32.590739 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:32.594980 systemd[1]: sshd@24-172.233.222.9:22-147.75.109.163:56420.service: Deactivated successfully. Aug 13 01:28:32.597167 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:28:32.598567 systemd-logind[1515]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:28:32.599759 systemd-logind[1515]: Removed session 25. Aug 13 01:28:37.652445 systemd[1]: Started sshd@25-172.233.222.9:22-147.75.109.163:56428.service - OpenSSH per-connection server daemon (147.75.109.163:56428). Aug 13 01:28:37.984511 sshd[4103]: Accepted publickey for core from 147.75.109.163 port 56428 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:37.986005 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:37.990861 systemd-logind[1515]: New session 26 of user core. Aug 13 01:28:38.002909 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 01:28:38.271875 sshd[4105]: Connection closed by 147.75.109.163 port 56428 Aug 13 01:28:38.272546 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:38.276901 systemd[1]: sshd@25-172.233.222.9:22-147.75.109.163:56428.service: Deactivated successfully. Aug 13 01:28:38.279022 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:28:38.279888 systemd-logind[1515]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:28:38.282020 systemd-logind[1515]: Removed session 26. Aug 13 01:28:43.343389 systemd[1]: Started sshd@26-172.233.222.9:22-147.75.109.163:41094.service - OpenSSH per-connection server daemon (147.75.109.163:41094). 
Aug 13 01:28:43.674122 sshd[4117]: Accepted publickey for core from 147.75.109.163 port 41094 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:43.675299 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:43.680498 systemd-logind[1515]: New session 27 of user core. Aug 13 01:28:43.685956 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 01:28:43.963112 sshd[4119]: Connection closed by 147.75.109.163 port 41094 Aug 13 01:28:43.963986 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:43.968416 systemd[1]: sshd@26-172.233.222.9:22-147.75.109.163:41094.service: Deactivated successfully. Aug 13 01:28:43.970425 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 01:28:43.971328 systemd-logind[1515]: Session 27 logged out. Waiting for processes to exit. Aug 13 01:28:43.972981 systemd-logind[1515]: Removed session 27. Aug 13 01:28:49.029134 systemd[1]: Started sshd@27-172.233.222.9:22-147.75.109.163:47370.service - OpenSSH per-connection server daemon (147.75.109.163:47370). Aug 13 01:28:49.375918 sshd[4131]: Accepted publickey for core from 147.75.109.163 port 47370 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:49.377646 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:49.382851 systemd-logind[1515]: New session 28 of user core. Aug 13 01:28:49.386913 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 01:28:49.670052 sshd[4133]: Connection closed by 147.75.109.163 port 47370 Aug 13 01:28:49.670687 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:49.675062 systemd-logind[1515]: Session 28 logged out. Waiting for processes to exit. Aug 13 01:28:49.675484 systemd[1]: sshd@27-172.233.222.9:22-147.75.109.163:47370.service: Deactivated successfully. 
Aug 13 01:28:49.677446 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 01:28:49.679211 systemd-logind[1515]: Removed session 28. Aug 13 01:28:54.730854 systemd[1]: Started sshd@28-172.233.222.9:22-147.75.109.163:47372.service - OpenSSH per-connection server daemon (147.75.109.163:47372). Aug 13 01:28:55.064274 sshd[4144]: Accepted publickey for core from 147.75.109.163 port 47372 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:28:55.066143 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:28:55.071009 systemd-logind[1515]: New session 29 of user core. Aug 13 01:28:55.076104 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 01:28:55.355847 sshd[4148]: Connection closed by 147.75.109.163 port 47372 Aug 13 01:28:55.356944 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Aug 13 01:28:55.362234 systemd-logind[1515]: Session 29 logged out. Waiting for processes to exit. Aug 13 01:28:55.362553 systemd[1]: sshd@28-172.233.222.9:22-147.75.109.163:47372.service: Deactivated successfully. Aug 13 01:28:55.364599 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 01:28:55.366582 systemd-logind[1515]: Removed session 29. Aug 13 01:28:58.449426 kubelet[2702]: E0813 01:28:58.449340 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:29:00.421993 systemd[1]: Started sshd@29-172.233.222.9:22-147.75.109.163:39122.service - OpenSSH per-connection server daemon (147.75.109.163:39122). 
Aug 13 01:29:00.756920 sshd[4163]: Accepted publickey for core from 147.75.109.163 port 39122 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:29:00.758861 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:29:00.764045 systemd-logind[1515]: New session 30 of user core. Aug 13 01:29:00.769910 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 01:29:01.050017 sshd[4166]: Connection closed by 147.75.109.163 port 39122 Aug 13 01:29:01.051061 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Aug 13 01:29:01.055487 systemd[1]: sshd@29-172.233.222.9:22-147.75.109.163:39122.service: Deactivated successfully. Aug 13 01:29:01.057836 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 01:29:01.059161 systemd-logind[1515]: Session 30 logged out. Waiting for processes to exit. Aug 13 01:29:01.060276 systemd-logind[1515]: Removed session 30. Aug 13 01:29:02.449393 kubelet[2702]: E0813 01:29:02.449342 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:29:06.119002 systemd[1]: Started sshd@30-172.233.222.9:22-147.75.109.163:39132.service - OpenSSH per-connection server daemon (147.75.109.163:39132). Aug 13 01:29:06.459395 sshd[4178]: Accepted publickey for core from 147.75.109.163 port 39132 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:29:06.461106 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:29:06.466856 systemd-logind[1515]: New session 31 of user core. Aug 13 01:29:06.473916 systemd[1]: Started session-31.scope - Session 31 of User core. 
Aug 13 01:29:06.762254 sshd[4180]: Connection closed by 147.75.109.163 port 39132 Aug 13 01:29:06.764012 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Aug 13 01:29:06.768707 systemd-logind[1515]: Session 31 logged out. Waiting for processes to exit. Aug 13 01:29:06.769588 systemd[1]: sshd@30-172.233.222.9:22-147.75.109.163:39132.service: Deactivated successfully. Aug 13 01:29:06.771749 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 01:29:06.773943 systemd-logind[1515]: Removed session 31. Aug 13 01:29:07.448818 kubelet[2702]: E0813 01:29:07.448563 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:29:11.827977 systemd[1]: Started sshd@31-172.233.222.9:22-147.75.109.163:57040.service - OpenSSH per-connection server daemon (147.75.109.163:57040). Aug 13 01:29:12.166008 sshd[4192]: Accepted publickey for core from 147.75.109.163 port 57040 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:29:12.168063 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:29:12.173925 systemd-logind[1515]: New session 32 of user core. Aug 13 01:29:12.178912 systemd[1]: Started session-32.scope - Session 32 of User core. Aug 13 01:29:12.469471 sshd[4194]: Connection closed by 147.75.109.163 port 57040 Aug 13 01:29:12.470419 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Aug 13 01:29:12.475535 systemd[1]: sshd@31-172.233.222.9:22-147.75.109.163:57040.service: Deactivated successfully. Aug 13 01:29:12.478271 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 01:29:12.479870 systemd-logind[1515]: Session 32 logged out. Waiting for processes to exit. Aug 13 01:29:12.481962 systemd-logind[1515]: Removed session 32. 
Aug 13 01:29:17.537956 systemd[1]: Started sshd@32-172.233.222.9:22-147.75.109.163:57048.service - OpenSSH per-connection server daemon (147.75.109.163:57048).
Aug 13 01:29:17.878444 sshd[4205]: Accepted publickey for core from 147.75.109.163 port 57048 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:17.879718 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:17.887250 systemd-logind[1515]: New session 33 of user core.
Aug 13 01:29:17.890897 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 01:29:18.161650 sshd[4207]: Connection closed by 147.75.109.163 port 57048
Aug 13 01:29:18.162729 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:18.168246 systemd-logind[1515]: Session 33 logged out. Waiting for processes to exit.
Aug 13 01:29:18.169215 systemd[1]: sshd@32-172.233.222.9:22-147.75.109.163:57048.service: Deactivated successfully.
Aug 13 01:29:18.171744 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 01:29:18.174591 systemd-logind[1515]: Removed session 33.
Aug 13 01:29:18.449102 kubelet[2702]: E0813 01:29:18.448730 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:29:19.449314 kubelet[2702]: E0813 01:29:19.448873 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:29:23.228301 systemd[1]: Started sshd@33-172.233.222.9:22-147.75.109.163:56074.service - OpenSSH per-connection server daemon (147.75.109.163:56074).
Aug 13 01:29:23.575835 sshd[4221]: Accepted publickey for core from 147.75.109.163 port 56074 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:23.576492 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:23.581510 systemd-logind[1515]: New session 34 of user core.
Aug 13 01:29:23.586936 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 01:29:23.879302 sshd[4223]: Connection closed by 147.75.109.163 port 56074
Aug 13 01:29:23.880998 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:23.885717 systemd[1]: sshd@33-172.233.222.9:22-147.75.109.163:56074.service: Deactivated successfully.
Aug 13 01:29:23.888436 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 01:29:23.889431 systemd-logind[1515]: Session 34 logged out. Waiting for processes to exit.
Aug 13 01:29:23.890967 systemd-logind[1515]: Removed session 34.
Aug 13 01:29:28.942098 systemd[1]: Started sshd@34-172.233.222.9:22-147.75.109.163:42584.service - OpenSSH per-connection server daemon (147.75.109.163:42584).
Aug 13 01:29:29.284907 sshd[4236]: Accepted publickey for core from 147.75.109.163 port 42584 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:29.286568 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:29.290890 systemd-logind[1515]: New session 35 of user core.
Aug 13 01:29:29.293909 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 01:29:29.588069 sshd[4238]: Connection closed by 147.75.109.163 port 42584
Aug 13 01:29:29.589037 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:29.592988 systemd[1]: sshd@34-172.233.222.9:22-147.75.109.163:42584.service: Deactivated successfully.
Aug 13 01:29:29.595259 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 01:29:29.596176 systemd-logind[1515]: Session 35 logged out. Waiting for processes to exit.
Aug 13 01:29:29.598177 systemd-logind[1515]: Removed session 35.
Aug 13 01:29:31.449560 kubelet[2702]: E0813 01:29:31.449147 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:29:34.652532 systemd[1]: Started sshd@35-172.233.222.9:22-147.75.109.163:42596.service - OpenSSH per-connection server daemon (147.75.109.163:42596).
Aug 13 01:29:34.979699 sshd[4249]: Accepted publickey for core from 147.75.109.163 port 42596 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:34.981061 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:34.985234 systemd-logind[1515]: New session 36 of user core.
Aug 13 01:29:34.991906 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 01:29:35.272484 sshd[4251]: Connection closed by 147.75.109.163 port 42596
Aug 13 01:29:35.273972 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:35.278252 systemd-logind[1515]: Session 36 logged out. Waiting for processes to exit.
Aug 13 01:29:35.278456 systemd[1]: sshd@35-172.233.222.9:22-147.75.109.163:42596.service: Deactivated successfully.
Aug 13 01:29:35.280489 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 01:29:35.282397 systemd-logind[1515]: Removed session 36.
Aug 13 01:29:40.334076 systemd[1]: Started sshd@36-172.233.222.9:22-147.75.109.163:55910.service - OpenSSH per-connection server daemon (147.75.109.163:55910).
Aug 13 01:29:40.678244 sshd[4262]: Accepted publickey for core from 147.75.109.163 port 55910 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:40.679663 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:40.684430 systemd-logind[1515]: New session 37 of user core.
Aug 13 01:29:40.696928 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 01:29:40.972085 sshd[4264]: Connection closed by 147.75.109.163 port 55910
Aug 13 01:29:40.972699 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:40.977346 systemd-logind[1515]: Session 37 logged out. Waiting for processes to exit.
Aug 13 01:29:40.978162 systemd[1]: sshd@36-172.233.222.9:22-147.75.109.163:55910.service: Deactivated successfully.
Aug 13 01:29:40.980726 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 01:29:40.982585 systemd-logind[1515]: Removed session 37.
Aug 13 01:29:46.035307 systemd[1]: Started sshd@37-172.233.222.9:22-147.75.109.163:55924.service - OpenSSH per-connection server daemon (147.75.109.163:55924).
Aug 13 01:29:46.373891 sshd[4276]: Accepted publickey for core from 147.75.109.163 port 55924 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:46.375175 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:46.380230 systemd-logind[1515]: New session 38 of user core.
Aug 13 01:29:46.384911 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 01:29:46.667370 sshd[4278]: Connection closed by 147.75.109.163 port 55924
Aug 13 01:29:46.668040 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:46.672107 systemd-logind[1515]: Session 38 logged out. Waiting for processes to exit.
Aug 13 01:29:46.673003 systemd[1]: sshd@37-172.233.222.9:22-147.75.109.163:55924.service: Deactivated successfully.
Aug 13 01:29:46.675075 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 01:29:46.676496 systemd-logind[1515]: Removed session 38.
Aug 13 01:29:51.729261 systemd[1]: Started sshd@38-172.233.222.9:22-147.75.109.163:46238.service - OpenSSH per-connection server daemon (147.75.109.163:46238).
Aug 13 01:29:52.070101 sshd[4290]: Accepted publickey for core from 147.75.109.163 port 46238 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:52.071285 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:52.076610 systemd-logind[1515]: New session 39 of user core.
Aug 13 01:29:52.085919 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 01:29:52.371191 sshd[4292]: Connection closed by 147.75.109.163 port 46238
Aug 13 01:29:52.372108 sshd-session[4290]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:52.376519 systemd[1]: sshd@38-172.233.222.9:22-147.75.109.163:46238.service: Deactivated successfully.
Aug 13 01:29:52.378903 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 01:29:52.379758 systemd-logind[1515]: Session 39 logged out. Waiting for processes to exit.
Aug 13 01:29:52.382086 systemd-logind[1515]: Removed session 39.
Aug 13 01:29:57.441910 systemd[1]: Started sshd@39-172.233.222.9:22-147.75.109.163:46252.service - OpenSSH per-connection server daemon (147.75.109.163:46252).
Aug 13 01:29:57.777864 sshd[4307]: Accepted publickey for core from 147.75.109.163 port 46252 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:57.779584 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:57.784519 systemd-logind[1515]: New session 40 of user core.
Aug 13 01:29:57.790921 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 01:29:58.069098 sshd[4309]: Connection closed by 147.75.109.163 port 46252
Aug 13 01:29:58.071005 sshd-session[4307]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:58.074895 systemd[1]: sshd@39-172.233.222.9:22-147.75.109.163:46252.service: Deactivated successfully.
Aug 13 01:29:58.077070 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 01:29:58.078208 systemd-logind[1515]: Session 40 logged out. Waiting for processes to exit.
Aug 13 01:29:58.079814 systemd-logind[1515]: Removed session 40.
Aug 13 01:30:00.448560 kubelet[2702]: E0813 01:30:00.448517 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:30:03.140128 systemd[1]: Started sshd@40-172.233.222.9:22-147.75.109.163:34362.service - OpenSSH per-connection server daemon (147.75.109.163:34362).
Aug 13 01:30:03.471227 sshd[4321]: Accepted publickey for core from 147.75.109.163 port 34362 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:03.473143 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:03.478869 systemd-logind[1515]: New session 41 of user core.
Aug 13 01:30:03.484991 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 01:30:03.763481 sshd[4323]: Connection closed by 147.75.109.163 port 34362
Aug 13 01:30:03.764404 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:03.769625 systemd[1]: sshd@40-172.233.222.9:22-147.75.109.163:34362.service: Deactivated successfully.
Aug 13 01:30:03.773054 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 01:30:03.773919 systemd-logind[1515]: Session 41 logged out. Waiting for processes to exit.
Aug 13 01:30:03.776085 systemd-logind[1515]: Removed session 41.
Aug 13 01:30:08.839984 systemd[1]: Started sshd@41-172.233.222.9:22-147.75.109.163:43154.service - OpenSSH per-connection server daemon (147.75.109.163:43154).
Aug 13 01:30:09.171509 sshd[4335]: Accepted publickey for core from 147.75.109.163 port 43154 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:09.172929 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:09.178591 systemd-logind[1515]: New session 42 of user core.
Aug 13 01:30:09.182921 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 01:30:09.475641 sshd[4337]: Connection closed by 147.75.109.163 port 43154
Aug 13 01:30:09.476271 sshd-session[4335]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:09.481267 systemd-logind[1515]: Session 42 logged out. Waiting for processes to exit.
Aug 13 01:30:09.482083 systemd[1]: sshd@41-172.233.222.9:22-147.75.109.163:43154.service: Deactivated successfully.
Aug 13 01:30:09.484465 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 01:30:09.486464 systemd-logind[1515]: Removed session 42.
Aug 13 01:30:10.449048 kubelet[2702]: E0813 01:30:10.449013 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:30:14.448738 kubelet[2702]: E0813 01:30:14.448704 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:30:14.539746 systemd[1]: Started sshd@42-172.233.222.9:22-147.75.109.163:43162.service - OpenSSH per-connection server daemon (147.75.109.163:43162).
Aug 13 01:30:14.884932 sshd[4348]: Accepted publickey for core from 147.75.109.163 port 43162 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:14.886765 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:14.893046 systemd-logind[1515]: New session 43 of user core.
Aug 13 01:30:14.899917 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 01:30:15.184318 sshd[4350]: Connection closed by 147.75.109.163 port 43162
Aug 13 01:30:15.185144 sshd-session[4348]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:15.189380 systemd[1]: sshd@42-172.233.222.9:22-147.75.109.163:43162.service: Deactivated successfully.
Aug 13 01:30:15.191281 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 01:30:15.192150 systemd-logind[1515]: Session 43 logged out. Waiting for processes to exit.
Aug 13 01:30:15.193687 systemd-logind[1515]: Removed session 43.
Aug 13 01:30:15.899525 containerd[1550]: time="2025-08-13T01:30:15.899433128Z" level=warning msg="container event discarded" container=cde1a6b86b2923227367e578499d2f802d788d6f2b3740462bd335d95344e156 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:15.910773 containerd[1550]: time="2025-08-13T01:30:15.910743544Z" level=warning msg="container event discarded" container=cde1a6b86b2923227367e578499d2f802d788d6f2b3740462bd335d95344e156 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:15.910773 containerd[1550]: time="2025-08-13T01:30:15.910766564Z" level=warning msg="container event discarded" container=b1e4c9c1af11dec3c0956b506fe5793d688584c2f8922c730d7bc03d27e63c5c type=CONTAINER_CREATED_EVENT
Aug 13 01:30:15.910773 containerd[1550]: time="2025-08-13T01:30:15.910775404Z" level=warning msg="container event discarded" container=b1e4c9c1af11dec3c0956b506fe5793d688584c2f8922c730d7bc03d27e63c5c type=CONTAINER_STARTED_EVENT
Aug 13 01:30:15.921986 containerd[1550]: time="2025-08-13T01:30:15.921963840Z" level=warning msg="container event discarded" container=26cf5d563ef3713738c6a5b037438a329a15fdff2e15c9c100de6ba4dcb84361 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:15.921986 containerd[1550]: time="2025-08-13T01:30:15.921983190Z" level=warning msg="container event discarded" container=26cf5d563ef3713738c6a5b037438a329a15fdff2e15c9c100de6ba4dcb84361 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:15.922061 containerd[1550]: time="2025-08-13T01:30:15.921991150Z" level=warning msg="container event discarded" container=9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea type=CONTAINER_CREATED_EVENT
Aug 13 01:30:15.938282 containerd[1550]: time="2025-08-13T01:30:15.938254898Z" level=warning msg="container event discarded" container=f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:15.938282 containerd[1550]: time="2025-08-13T01:30:15.938274898Z" level=warning msg="container event discarded" container=188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef type=CONTAINER_CREATED_EVENT
Aug 13 01:30:16.028713 containerd[1550]: time="2025-08-13T01:30:16.028670774Z" level=warning msg="container event discarded" container=f72cc72f6e7d439f5afc1dfe36783ed30929f906b589658b5e88e2eace1a4ab8 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:16.028713 containerd[1550]: time="2025-08-13T01:30:16.028701554Z" level=warning msg="container event discarded" container=9c5cc4b5d88c4c218ae78d997e52330d639617716a94378a6864073de9def0ea type=CONTAINER_STARTED_EVENT
Aug 13 01:30:16.045927 containerd[1550]: time="2025-08-13T01:30:16.045895347Z" level=warning msg="container event discarded" container=188e7bcfb7ecf8d6df0c8641e0d9cea267a47cf0698becf5ce7e9d68550634ef type=CONTAINER_STARTED_EVENT
Aug 13 01:30:20.248578 systemd[1]: Started sshd@43-172.233.222.9:22-147.75.109.163:58250.service - OpenSSH per-connection server daemon (147.75.109.163:58250).
Aug 13 01:30:20.589024 sshd[4365]: Accepted publickey for core from 147.75.109.163 port 58250 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:20.590256 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:20.595174 systemd-logind[1515]: New session 44 of user core.
Aug 13 01:30:20.601895 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 01:30:20.886571 sshd[4367]: Connection closed by 147.75.109.163 port 58250
Aug 13 01:30:20.887321 sshd-session[4365]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:20.891240 systemd-logind[1515]: Session 44 logged out. Waiting for processes to exit.
Aug 13 01:30:20.892014 systemd[1]: sshd@43-172.233.222.9:22-147.75.109.163:58250.service: Deactivated successfully.
Aug 13 01:30:20.894206 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 01:30:20.895526 systemd-logind[1515]: Removed session 44.
Aug 13 01:30:21.448725 kubelet[2702]: E0813 01:30:21.448537 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:30:25.948718 systemd[1]: Started sshd@44-172.233.222.9:22-147.75.109.163:58258.service - OpenSSH per-connection server daemon (147.75.109.163:58258).
Aug 13 01:30:26.104868 containerd[1550]: time="2025-08-13T01:30:26.104809337Z" level=warning msg="container event discarded" container=4634859afd0297fdd9a1630912eabed6e072606ce00f7ccc8753d69a66504c55 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:26.104868 containerd[1550]: time="2025-08-13T01:30:26.104849537Z" level=warning msg="container event discarded" container=4634859afd0297fdd9a1630912eabed6e072606ce00f7ccc8753d69a66504c55 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:26.138434 containerd[1550]: time="2025-08-13T01:30:26.138096172Z" level=warning msg="container event discarded" container=3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:26.163720 containerd[1550]: time="2025-08-13T01:30:26.163672946Z" level=warning msg="container event discarded" container=dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:26.163720 containerd[1550]: time="2025-08-13T01:30:26.163696216Z" level=warning msg="container event discarded" container=dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:26.206006 containerd[1550]: time="2025-08-13T01:30:26.205886447Z" level=warning msg="container event discarded" container=3485339ca1f1baaec5fb5c4b15e045c740a34b56e27b552bfe27064d6b74fd84 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:26.293827 sshd[4379]: Accepted publickey for core from 147.75.109.163 port 58258 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:26.295046 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:26.299849 systemd-logind[1515]: New session 45 of user core.
Aug 13 01:30:26.302978 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 01:30:26.309299 containerd[1550]: time="2025-08-13T01:30:26.309260867Z" level=warning msg="container event discarded" container=4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:26.309299 containerd[1550]: time="2025-08-13T01:30:26.309289127Z" level=warning msg="container event discarded" container=4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:26.593063 sshd[4381]: Connection closed by 147.75.109.163 port 58258
Aug 13 01:30:26.593643 sshd-session[4379]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:26.597848 systemd-logind[1515]: Session 45 logged out. Waiting for processes to exit.
Aug 13 01:30:26.598456 systemd[1]: sshd@44-172.233.222.9:22-147.75.109.163:58258.service: Deactivated successfully.
Aug 13 01:30:26.601306 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 01:30:26.602713 systemd-logind[1515]: Removed session 45.
Aug 13 01:30:29.448698 kubelet[2702]: E0813 01:30:29.448617 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:30:30.132090 containerd[1550]: time="2025-08-13T01:30:30.132040156Z" level=warning msg="container event discarded" container=05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:30.183675 containerd[1550]: time="2025-08-13T01:30:30.183637053Z" level=warning msg="container event discarded" container=05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:30.282841 containerd[1550]: time="2025-08-13T01:30:30.282747635Z" level=warning msg="container event discarded" container=05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7 type=CONTAINER_STOPPED_EVENT
Aug 13 01:30:30.503266 containerd[1550]: time="2025-08-13T01:30:30.503229693Z" level=warning msg="container event discarded" container=3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:30.550477 containerd[1550]: time="2025-08-13T01:30:30.550434208Z" level=warning msg="container event discarded" container=3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:30.593662 containerd[1550]: time="2025-08-13T01:30:30.593638146Z" level=warning msg="container event discarded" container=3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85 type=CONTAINER_STOPPED_EVENT
Aug 13 01:30:31.122093 containerd[1550]: time="2025-08-13T01:30:31.122030840Z" level=warning msg="container event discarded" container=e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:31.167432 containerd[1550]: time="2025-08-13T01:30:31.167352552Z" level=warning msg="container event discarded" container=e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:31.520876 containerd[1550]: time="2025-08-13T01:30:31.520822789Z" level=warning msg="container event discarded" container=2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:31.590138 containerd[1550]: time="2025-08-13T01:30:31.590104382Z" level=warning msg="container event discarded" container=2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:31.644359 containerd[1550]: time="2025-08-13T01:30:31.644327571Z" level=warning msg="container event discarded" container=2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7 type=CONTAINER_STOPPED_EVENT
Aug 13 01:30:31.652197 systemd[1]: Started sshd@45-172.233.222.9:22-147.75.109.163:37018.service - OpenSSH per-connection server daemon (147.75.109.163:37018).
Aug 13 01:30:31.992734 sshd[4395]: Accepted publickey for core from 147.75.109.163 port 37018 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:31.993269 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:31.997852 systemd-logind[1515]: New session 46 of user core.
Aug 13 01:30:32.009916 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 01:30:32.287267 sshd[4397]: Connection closed by 147.75.109.163 port 37018
Aug 13 01:30:32.289252 sshd-session[4395]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:32.293412 systemd[1]: sshd@45-172.233.222.9:22-147.75.109.163:37018.service: Deactivated successfully.
Aug 13 01:30:32.296143 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 01:30:32.297199 systemd-logind[1515]: Session 46 logged out. Waiting for processes to exit.
Aug 13 01:30:32.299482 systemd-logind[1515]: Removed session 46.
Aug 13 01:30:32.519672 containerd[1550]: time="2025-08-13T01:30:32.519580926Z" level=warning msg="container event discarded" container=c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc type=CONTAINER_CREATED_EVENT
Aug 13 01:30:32.566386 containerd[1550]: time="2025-08-13T01:30:32.566276173Z" level=warning msg="container event discarded" container=c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc type=CONTAINER_STARTED_EVENT
Aug 13 01:30:32.587505 containerd[1550]: time="2025-08-13T01:30:32.587482384Z" level=warning msg="container event discarded" container=c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc type=CONTAINER_STOPPED_EVENT
Aug 13 01:30:33.519963 containerd[1550]: time="2025-08-13T01:30:33.519903877Z" level=warning msg="container event discarded" container=0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:33.575192 containerd[1550]: time="2025-08-13T01:30:33.575154634Z" level=warning msg="container event discarded" container=0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:37.355538 systemd[1]: Started sshd@46-172.233.222.9:22-147.75.109.163:37028.service - OpenSSH per-connection server daemon (147.75.109.163:37028).
Aug 13 01:30:37.701361 sshd[4410]: Accepted publickey for core from 147.75.109.163 port 37028 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:37.701838 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:37.706277 systemd-logind[1515]: New session 47 of user core.
Aug 13 01:30:37.715908 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 01:30:37.999894 sshd[4412]: Connection closed by 147.75.109.163 port 37028
Aug 13 01:30:38.000512 sshd-session[4410]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:38.005172 systemd[1]: sshd@46-172.233.222.9:22-147.75.109.163:37028.service: Deactivated successfully.
Aug 13 01:30:38.007921 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 01:30:38.009356 systemd-logind[1515]: Session 47 logged out. Waiting for processes to exit.
Aug 13 01:30:38.010913 systemd-logind[1515]: Removed session 47.
Aug 13 01:30:43.067966 systemd[1]: Started sshd@47-172.233.222.9:22-147.75.109.163:48954.service - OpenSSH per-connection server daemon (147.75.109.163:48954).
Aug 13 01:30:43.410887 sshd[4424]: Accepted publickey for core from 147.75.109.163 port 48954 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:43.411874 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:43.417491 systemd-logind[1515]: New session 48 of user core.
Aug 13 01:30:43.427940 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 01:30:43.717666 sshd[4426]: Connection closed by 147.75.109.163 port 48954
Aug 13 01:30:43.718522 sshd-session[4424]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:43.723047 systemd-logind[1515]: Session 48 logged out. Waiting for processes to exit.
Aug 13 01:30:43.723969 systemd[1]: sshd@47-172.233.222.9:22-147.75.109.163:48954.service: Deactivated successfully.
Aug 13 01:30:43.725989 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 01:30:43.728277 systemd-logind[1515]: Removed session 48.
Aug 13 01:30:48.781627 systemd[1]: Started sshd@48-172.233.222.9:22-147.75.109.163:34616.service - OpenSSH per-connection server daemon (147.75.109.163:34616).
Aug 13 01:30:49.115982 sshd[4438]: Accepted publickey for core from 147.75.109.163 port 34616 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:49.117668 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:49.122857 systemd-logind[1515]: New session 49 of user core.
Aug 13 01:30:49.127909 systemd[1]: Started session-49.scope - Session 49 of User core.
Aug 13 01:30:49.406951 sshd[4440]: Connection closed by 147.75.109.163 port 34616
Aug 13 01:30:49.407981 sshd-session[4438]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:49.411292 systemd[1]: sshd@48-172.233.222.9:22-147.75.109.163:34616.service: Deactivated successfully.
Aug 13 01:30:49.413532 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 01:30:49.415122 systemd-logind[1515]: Session 49 logged out. Waiting for processes to exit.
Aug 13 01:30:49.416703 systemd-logind[1515]: Removed session 49.
Aug 13 01:30:54.470393 systemd[1]: Started sshd@49-172.233.222.9:22-147.75.109.163:34620.service - OpenSSH per-connection server daemon (147.75.109.163:34620).
Aug 13 01:30:54.810045 sshd[4452]: Accepted publickey for core from 147.75.109.163 port 34620 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:54.811298 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:54.815976 systemd-logind[1515]: New session 50 of user core.
Aug 13 01:30:54.821901 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 01:30:55.106643 sshd[4454]: Connection closed by 147.75.109.163 port 34620
Aug 13 01:30:55.108188 sshd-session[4452]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:55.111074 systemd[1]: sshd@49-172.233.222.9:22-147.75.109.163:34620.service: Deactivated successfully.
Aug 13 01:30:55.113055 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 01:30:55.114655 systemd-logind[1515]: Session 50 logged out. Waiting for processes to exit.
Aug 13 01:30:55.116194 systemd-logind[1515]: Removed session 50.
Aug 13 01:30:59.449817 kubelet[2702]: E0813 01:30:59.449349 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:00.172201 systemd[1]: Started sshd@50-172.233.222.9:22-147.75.109.163:55654.service - OpenSSH per-connection server daemon (147.75.109.163:55654).
Aug 13 01:31:00.507174 sshd[4468]: Accepted publickey for core from 147.75.109.163 port 55654 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:31:00.508673 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:31:00.517041 systemd-logind[1515]: New session 51 of user core.
Aug 13 01:31:00.522002 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 01:31:00.804148 sshd[4470]: Connection closed by 147.75.109.163 port 55654
Aug 13 01:31:00.804925 sshd-session[4468]: pam_unix(sshd:session): session closed for user core
Aug 13 01:31:00.808855 systemd[1]: sshd@50-172.233.222.9:22-147.75.109.163:55654.service: Deactivated successfully.
Aug 13 01:31:00.811249 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 01:31:00.812201 systemd-logind[1515]: Session 51 logged out. Waiting for processes to exit.
Aug 13 01:31:00.814062 systemd-logind[1515]: Removed session 51.
Aug 13 01:31:05.870464 systemd[1]: Started sshd@51-172.233.222.9:22-147.75.109.163:55662.service - OpenSSH per-connection server daemon (147.75.109.163:55662).
Aug 13 01:31:06.205918 sshd[4482]: Accepted publickey for core from 147.75.109.163 port 55662 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:31:06.207144 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:31:06.212651 systemd-logind[1515]: New session 52 of user core.
Aug 13 01:31:06.219973 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 01:31:06.502012 sshd[4484]: Connection closed by 147.75.109.163 port 55662
Aug 13 01:31:06.502561 sshd-session[4482]: pam_unix(sshd:session): session closed for user core
Aug 13 01:31:06.506945 systemd-logind[1515]: Session 52 logged out. Waiting for processes to exit.
Aug 13 01:31:06.507243 systemd[1]: sshd@51-172.233.222.9:22-147.75.109.163:55662.service: Deactivated successfully.
Aug 13 01:31:06.509435 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 01:31:06.511531 systemd-logind[1515]: Removed session 52.
Aug 13 01:31:08.448869 kubelet[2702]: E0813 01:31:08.448808 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:31:11.579458 systemd[1]: Started sshd@52-172.233.222.9:22-147.75.109.163:46182.service - OpenSSH per-connection server daemon (147.75.109.163:46182). Aug 13 01:31:11.930881 sshd[4496]: Accepted publickey for core from 147.75.109.163 port 46182 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:31:11.931984 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:31:11.937016 systemd-logind[1515]: New session 53 of user core. Aug 13 01:31:11.941906 systemd[1]: Started session-53.scope - Session 53 of User core. Aug 13 01:31:12.230949 sshd[4498]: Connection closed by 147.75.109.163 port 46182 Aug 13 01:31:12.231966 sshd-session[4496]: pam_unix(sshd:session): session closed for user core Aug 13 01:31:12.235884 systemd[1]: sshd@52-172.233.222.9:22-147.75.109.163:46182.service: Deactivated successfully. Aug 13 01:31:12.237975 systemd[1]: session-53.scope: Deactivated successfully. Aug 13 01:31:12.239109 systemd-logind[1515]: Session 53 logged out. Waiting for processes to exit. Aug 13 01:31:12.240306 systemd-logind[1515]: Removed session 53. Aug 13 01:31:12.292057 systemd[1]: Started sshd@53-172.233.222.9:22-147.75.109.163:46196.service - OpenSSH per-connection server daemon (147.75.109.163:46196). Aug 13 01:31:12.637604 sshd[4509]: Accepted publickey for core from 147.75.109.163 port 46196 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:31:12.638124 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:31:12.643019 systemd-logind[1515]: New session 54 of user core. Aug 13 01:31:12.651916 systemd[1]: Started session-54.scope - Session 54 of User core. 
Aug 13 01:31:14.106913 containerd[1550]: time="2025-08-13T01:31:14.106871862Z" level=info msg="StopContainer for \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" with timeout 30 (s)" Aug 13 01:31:14.107818 containerd[1550]: time="2025-08-13T01:31:14.107768069Z" level=info msg="Stop container \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" with signal terminated" Aug 13 01:31:14.125120 systemd[1]: cri-containerd-e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4.scope: Deactivated successfully. Aug 13 01:31:14.127421 containerd[1550]: time="2025-08-13T01:31:14.127346323Z" level=info msg="received exit event container_id:\"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" id:\"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" pid:3221 exited_at:{seconds:1755048674 nanos:127083104}" Aug 13 01:31:14.128050 containerd[1550]: time="2025-08-13T01:31:14.127672131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" id:\"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" pid:3221 exited_at:{seconds:1755048674 nanos:127083104}" Aug 13 01:31:14.139472 containerd[1550]: time="2025-08-13T01:31:14.139398702Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:31:14.145757 containerd[1550]: time="2025-08-13T01:31:14.145709640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" id:\"fd67662f9a8a7172730c7755c12f00bc8011a0bf5d3e3cdf6601ef405695f9b0\" pid:4537 exited_at:{seconds:1755048674 nanos:144917873}" Aug 13 01:31:14.148153 containerd[1550]: time="2025-08-13T01:31:14.148069992Z" level=info 
msg="StopContainer for \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" with timeout 2 (s)" Aug 13 01:31:14.148761 containerd[1550]: time="2025-08-13T01:31:14.148742850Z" level=info msg="Stop container \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" with signal terminated" Aug 13 01:31:14.156429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4-rootfs.mount: Deactivated successfully. Aug 13 01:31:14.157807 systemd-networkd[1466]: lxc_health: Link DOWN Aug 13 01:31:14.157822 systemd-networkd[1466]: lxc_health: Lost carrier Aug 13 01:31:14.175098 systemd[1]: cri-containerd-0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320.scope: Deactivated successfully. Aug 13 01:31:14.175503 systemd[1]: cri-containerd-0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320.scope: Consumed 5.280s CPU time, 123.7M memory peak, 144K read from disk, 13.3M written to disk. 
Aug 13 01:31:14.179131 containerd[1550]: time="2025-08-13T01:31:14.179103767Z" level=info msg="received exit event container_id:\"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" id:\"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" pid:3331 exited_at:{seconds:1755048674 nanos:178924128}" Aug 13 01:31:14.179257 containerd[1550]: time="2025-08-13T01:31:14.179236247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" id:\"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" pid:3331 exited_at:{seconds:1755048674 nanos:178924128}" Aug 13 01:31:14.179492 containerd[1550]: time="2025-08-13T01:31:14.179478876Z" level=info msg="StopContainer for \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" returns successfully" Aug 13 01:31:14.179948 containerd[1550]: time="2025-08-13T01:31:14.179929634Z" level=info msg="StopPodSandbox for \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\"" Aug 13 01:31:14.180011 containerd[1550]: time="2025-08-13T01:31:14.179973554Z" level=info msg="Container to stop \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:31:14.190584 systemd[1]: cri-containerd-4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59.scope: Deactivated successfully. 
Aug 13 01:31:14.193253 containerd[1550]: time="2025-08-13T01:31:14.193187140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" id:\"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" pid:2940 exit_status:137 exited_at:{seconds:1755048674 nanos:192833171}" Aug 13 01:31:14.206146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320-rootfs.mount: Deactivated successfully. Aug 13 01:31:14.213936 containerd[1550]: time="2025-08-13T01:31:14.213903689Z" level=info msg="StopContainer for \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" returns successfully" Aug 13 01:31:14.215285 containerd[1550]: time="2025-08-13T01:31:14.215264855Z" level=info msg="StopPodSandbox for \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\"" Aug 13 01:31:14.215339 containerd[1550]: time="2025-08-13T01:31:14.215313915Z" level=info msg="Container to stop \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:31:14.215339 containerd[1550]: time="2025-08-13T01:31:14.215323795Z" level=info msg="Container to stop \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:31:14.215339 containerd[1550]: time="2025-08-13T01:31:14.215331335Z" level=info msg="Container to stop \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:31:14.215339 containerd[1550]: time="2025-08-13T01:31:14.215338985Z" level=info msg="Container to stop \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:31:14.215339 containerd[1550]: 
time="2025-08-13T01:31:14.215345775Z" level=info msg="Container to stop \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:31:14.231550 systemd[1]: cri-containerd-dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94.scope: Deactivated successfully. Aug 13 01:31:14.244958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59-rootfs.mount: Deactivated successfully. Aug 13 01:31:14.250803 containerd[1550]: time="2025-08-13T01:31:14.250753505Z" level=info msg="shim disconnected" id=4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59 namespace=k8s.io Aug 13 01:31:14.251036 containerd[1550]: time="2025-08-13T01:31:14.250971674Z" level=warning msg="cleaning up after shim disconnected" id=4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59 namespace=k8s.io Aug 13 01:31:14.251036 containerd[1550]: time="2025-08-13T01:31:14.250985694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:31:14.264230 containerd[1550]: time="2025-08-13T01:31:14.264200929Z" level=info msg="received exit event sandbox_id:\"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" exit_status:137 exited_at:{seconds:1755048674 nanos:192833171}" Aug 13 01:31:14.265902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59-shm.mount: Deactivated successfully. 
Aug 13 01:31:14.266544 containerd[1550]: time="2025-08-13T01:31:14.266259052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" id:\"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" pid:2859 exit_status:137 exited_at:{seconds:1755048674 nanos:236866392}" Aug 13 01:31:14.266800 containerd[1550]: time="2025-08-13T01:31:14.266601942Z" level=info msg="TearDown network for sandbox \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" successfully" Aug 13 01:31:14.266800 containerd[1550]: time="2025-08-13T01:31:14.266624552Z" level=info msg="StopPodSandbox for \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" returns successfully" Aug 13 01:31:14.271395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94-rootfs.mount: Deactivated successfully. Aug 13 01:31:14.275827 containerd[1550]: time="2025-08-13T01:31:14.275511921Z" level=info msg="shim disconnected" id=dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94 namespace=k8s.io Aug 13 01:31:14.275827 containerd[1550]: time="2025-08-13T01:31:14.275533271Z" level=warning msg="cleaning up after shim disconnected" id=dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94 namespace=k8s.io Aug 13 01:31:14.275827 containerd[1550]: time="2025-08-13T01:31:14.275540911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:31:14.276629 containerd[1550]: time="2025-08-13T01:31:14.276046979Z" level=info msg="received exit event sandbox_id:\"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" exit_status:137 exited_at:{seconds:1755048674 nanos:236866392}" Aug 13 01:31:14.281771 containerd[1550]: time="2025-08-13T01:31:14.281741560Z" level=info msg="TearDown network for sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" successfully" Aug 13 
01:31:14.281894 containerd[1550]: time="2025-08-13T01:31:14.281864629Z" level=info msg="StopPodSandbox for \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" returns successfully" Aug 13 01:31:14.421821 kubelet[2702]: I0813 01:31:14.421156 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-host-proc-sys-net\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.421821 kubelet[2702]: I0813 01:31:14.421182 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-host-proc-sys-kernel\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.421821 kubelet[2702]: I0813 01:31:14.421202 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/025e24fb-7026-4e6f-b2f2-17d07d390180-clustermesh-secrets\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.421821 kubelet[2702]: I0813 01:31:14.421217 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-run\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.421821 kubelet[2702]: I0813 01:31:14.421232 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmfd\" (UniqueName: \"kubernetes.io/projected/025e24fb-7026-4e6f-b2f2-17d07d390180-kube-api-access-rwmfd\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 
01:31:14.421821 kubelet[2702]: I0813 01:31:14.421246 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws7pb\" (UniqueName: \"kubernetes.io/projected/26a9811f-ff63-41d6-b219-d1ffa4ebedec-kube-api-access-ws7pb\") pod \"26a9811f-ff63-41d6-b219-d1ffa4ebedec\" (UID: \"26a9811f-ff63-41d6-b219-d1ffa4ebedec\") " Aug 13 01:31:14.422296 kubelet[2702]: I0813 01:31:14.421260 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/025e24fb-7026-4e6f-b2f2-17d07d390180-hubble-tls\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422296 kubelet[2702]: I0813 01:31:14.421273 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26a9811f-ff63-41d6-b219-d1ffa4ebedec-cilium-config-path\") pod \"26a9811f-ff63-41d6-b219-d1ffa4ebedec\" (UID: \"26a9811f-ff63-41d6-b219-d1ffa4ebedec\") " Aug 13 01:31:14.422296 kubelet[2702]: I0813 01:31:14.421286 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-xtables-lock\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422296 kubelet[2702]: I0813 01:31:14.421299 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-lib-modules\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422296 kubelet[2702]: I0813 01:31:14.421313 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-cgroup\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422296 kubelet[2702]: I0813 01:31:14.421325 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-hostproc\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422438 kubelet[2702]: I0813 01:31:14.421337 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-etc-cni-netd\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422438 kubelet[2702]: I0813 01:31:14.421351 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-config-path\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422438 kubelet[2702]: I0813 01:31:14.421363 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cni-path\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422438 kubelet[2702]: I0813 01:31:14.421374 2702 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-bpf-maps\") pod \"025e24fb-7026-4e6f-b2f2-17d07d390180\" (UID: \"025e24fb-7026-4e6f-b2f2-17d07d390180\") " Aug 13 01:31:14.422438 kubelet[2702]: I0813 01:31:14.421412 2702 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.422438 kubelet[2702]: I0813 01:31:14.421439 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.422568 kubelet[2702]: I0813 01:31:14.421452 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.422568 kubelet[2702]: I0813 01:31:14.421632 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.422568 kubelet[2702]: I0813 01:31:14.421657 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.424134 kubelet[2702]: I0813 01:31:14.424106 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025e24fb-7026-4e6f-b2f2-17d07d390180-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:31:14.424184 kubelet[2702]: I0813 01:31:14.424145 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.424184 kubelet[2702]: I0813 01:31:14.424160 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.424184 kubelet[2702]: I0813 01:31:14.424173 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-hostproc" (OuterVolumeSpecName: "hostproc") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.424254 kubelet[2702]: I0813 01:31:14.424185 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.425869 kubelet[2702]: I0813 01:31:14.425838 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025e24fb-7026-4e6f-b2f2-17d07d390180-kube-api-access-rwmfd" (OuterVolumeSpecName: "kube-api-access-rwmfd") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "kube-api-access-rwmfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:31:14.427461 kubelet[2702]: I0813 01:31:14.427434 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:31:14.427544 kubelet[2702]: I0813 01:31:14.427467 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cni-path" (OuterVolumeSpecName: "cni-path") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:31:14.429525 kubelet[2702]: I0813 01:31:14.429243 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a9811f-ff63-41d6-b219-d1ffa4ebedec-kube-api-access-ws7pb" (OuterVolumeSpecName: "kube-api-access-ws7pb") pod "26a9811f-ff63-41d6-b219-d1ffa4ebedec" (UID: "26a9811f-ff63-41d6-b219-d1ffa4ebedec"). InnerVolumeSpecName "kube-api-access-ws7pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:31:14.429525 kubelet[2702]: I0813 01:31:14.429483 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025e24fb-7026-4e6f-b2f2-17d07d390180-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "025e24fb-7026-4e6f-b2f2-17d07d390180" (UID: "025e24fb-7026-4e6f-b2f2-17d07d390180"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:31:14.431904 kubelet[2702]: I0813 01:31:14.431879 2702 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a9811f-ff63-41d6-b219-d1ffa4ebedec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26a9811f-ff63-41d6-b219-d1ffa4ebedec" (UID: "26a9811f-ff63-41d6-b219-d1ffa4ebedec"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:31:14.522624 kubelet[2702]: I0813 01:31:14.522587 2702 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-xtables-lock\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522624 kubelet[2702]: I0813 01:31:14.522617 2702 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-etc-cni-netd\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522624 kubelet[2702]: I0813 01:31:14.522626 2702 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-lib-modules\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522852 kubelet[2702]: I0813 01:31:14.522633 2702 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-cgroup\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522852 kubelet[2702]: I0813 01:31:14.522641 2702 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-hostproc\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522852 kubelet[2702]: I0813 01:31:14.522648 2702 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-config-path\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522852 kubelet[2702]: I0813 01:31:14.522657 2702 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cni-path\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522852 kubelet[2702]: I0813 
01:31:14.522666 2702 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-bpf-maps\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522852 kubelet[2702]: I0813 01:31:14.522673 2702 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-host-proc-sys-net\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522852 kubelet[2702]: I0813 01:31:14.522680 2702 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-host-proc-sys-kernel\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.522852 kubelet[2702]: I0813 01:31:14.522687 2702 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/025e24fb-7026-4e6f-b2f2-17d07d390180-clustermesh-secrets\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.523020 kubelet[2702]: I0813 01:31:14.522694 2702 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/025e24fb-7026-4e6f-b2f2-17d07d390180-cilium-run\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.523020 kubelet[2702]: I0813 01:31:14.522701 2702 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmfd\" (UniqueName: \"kubernetes.io/projected/025e24fb-7026-4e6f-b2f2-17d07d390180-kube-api-access-rwmfd\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.523020 kubelet[2702]: I0813 01:31:14.522709 2702 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws7pb\" (UniqueName: \"kubernetes.io/projected/26a9811f-ff63-41d6-b219-d1ffa4ebedec-kube-api-access-ws7pb\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.523020 kubelet[2702]: I0813 01:31:14.522716 2702 
reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/025e24fb-7026-4e6f-b2f2-17d07d390180-hubble-tls\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.523020 kubelet[2702]: I0813 01:31:14.522723 2702 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26a9811f-ff63-41d6-b219-d1ffa4ebedec-cilium-config-path\") on node \"172-233-222-9\" DevicePath \"\"" Aug 13 01:31:14.565431 kubelet[2702]: E0813 01:31:14.565371 2702 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:31:15.125013 kubelet[2702]: I0813 01:31:15.124867 2702 scope.go:117] "RemoveContainer" containerID="e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4" Aug 13 01:31:15.127999 containerd[1550]: time="2025-08-13T01:31:15.127558709Z" level=info msg="RemoveContainer for \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\"" Aug 13 01:31:15.131610 systemd[1]: Removed slice kubepods-besteffort-pod26a9811f_ff63_41d6_b219_d1ffa4ebedec.slice - libcontainer container kubepods-besteffort-pod26a9811f_ff63_41d6_b219_d1ffa4ebedec.slice. Aug 13 01:31:15.134164 containerd[1550]: time="2025-08-13T01:31:15.134137127Z" level=info msg="RemoveContainer for \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" returns successfully" Aug 13 01:31:15.139766 systemd[1]: Removed slice kubepods-burstable-pod025e24fb_7026_4e6f_b2f2_17d07d390180.slice - libcontainer container kubepods-burstable-pod025e24fb_7026_4e6f_b2f2_17d07d390180.slice. Aug 13 01:31:15.140255 systemd[1]: kubepods-burstable-pod025e24fb_7026_4e6f_b2f2_17d07d390180.slice: Consumed 5.359s CPU time, 124.2M memory peak, 144K read from disk, 13.3M written to disk. 
Aug 13 01:31:15.140702 kubelet[2702]: I0813 01:31:15.140685 2702 scope.go:117] "RemoveContainer" containerID="e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4" Aug 13 01:31:15.142071 containerd[1550]: time="2025-08-13T01:31:15.142016260Z" level=error msg="ContainerStatus for \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\": not found" Aug 13 01:31:15.142464 kubelet[2702]: E0813 01:31:15.142434 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\": not found" containerID="e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4" Aug 13 01:31:15.142529 kubelet[2702]: I0813 01:31:15.142461 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4"} err="failed to get container status \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e088df4071187ae3e7f77eb59723379ccd822818bada480b40f59409f73fe8e4\": not found" Aug 13 01:31:15.142566 kubelet[2702]: I0813 01:31:15.142528 2702 scope.go:117] "RemoveContainer" containerID="0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320" Aug 13 01:31:15.148333 containerd[1550]: time="2025-08-13T01:31:15.148298349Z" level=info msg="RemoveContainer for \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\"" Aug 13 01:31:15.152167 containerd[1550]: time="2025-08-13T01:31:15.151949977Z" level=info msg="RemoveContainer for \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" returns successfully" Aug 13 01:31:15.152410 
kubelet[2702]: I0813 01:31:15.152327 2702 scope.go:117] "RemoveContainer" containerID="c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc" Aug 13 01:31:15.156549 systemd[1]: var-lib-kubelet-pods-26a9811f\x2dff63\x2d41d6\x2db219\x2dd1ffa4ebedec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dws7pb.mount: Deactivated successfully. Aug 13 01:31:15.158906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94-shm.mount: Deactivated successfully. Aug 13 01:31:15.158984 systemd[1]: var-lib-kubelet-pods-025e24fb\x2d7026\x2d4e6f\x2db2f2\x2d17d07d390180-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:31:15.159061 systemd[1]: var-lib-kubelet-pods-025e24fb\x2d7026\x2d4e6f\x2db2f2\x2d17d07d390180-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drwmfd.mount: Deactivated successfully. Aug 13 01:31:15.159128 systemd[1]: var-lib-kubelet-pods-025e24fb\x2d7026\x2d4e6f\x2db2f2\x2d17d07d390180-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 01:31:15.166285 containerd[1550]: time="2025-08-13T01:31:15.166010219Z" level=info msg="RemoveContainer for \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\"" Aug 13 01:31:15.170501 containerd[1550]: time="2025-08-13T01:31:15.170475994Z" level=info msg="RemoveContainer for \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\" returns successfully" Aug 13 01:31:15.170627 kubelet[2702]: I0813 01:31:15.170607 2702 scope.go:117] "RemoveContainer" containerID="2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7" Aug 13 01:31:15.173247 containerd[1550]: time="2025-08-13T01:31:15.173184015Z" level=info msg="RemoveContainer for \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\"" Aug 13 01:31:15.175996 containerd[1550]: time="2025-08-13T01:31:15.175978396Z" level=info msg="RemoveContainer for \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\" returns successfully" Aug 13 01:31:15.176137 kubelet[2702]: I0813 01:31:15.176090 2702 scope.go:117] "RemoveContainer" containerID="3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85" Aug 13 01:31:15.177097 containerd[1550]: time="2025-08-13T01:31:15.177075102Z" level=info msg="RemoveContainer for \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\"" Aug 13 01:31:15.180431 containerd[1550]: time="2025-08-13T01:31:15.180411560Z" level=info msg="RemoveContainer for \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\" returns successfully" Aug 13 01:31:15.180642 kubelet[2702]: I0813 01:31:15.180559 2702 scope.go:117] "RemoveContainer" containerID="05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7" Aug 13 01:31:15.181771 containerd[1550]: time="2025-08-13T01:31:15.181731196Z" level=info msg="RemoveContainer for \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\"" Aug 13 01:31:15.184869 containerd[1550]: time="2025-08-13T01:31:15.184837405Z" level=info msg="RemoveContainer 
for \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\" returns successfully" Aug 13 01:31:15.185136 kubelet[2702]: I0813 01:31:15.185065 2702 scope.go:117] "RemoveContainer" containerID="0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320" Aug 13 01:31:15.185419 containerd[1550]: time="2025-08-13T01:31:15.185341304Z" level=error msg="ContainerStatus for \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\": not found" Aug 13 01:31:15.185516 kubelet[2702]: E0813 01:31:15.185497 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\": not found" containerID="0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320" Aug 13 01:31:15.186169 kubelet[2702]: I0813 01:31:15.185519 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320"} err="failed to get container status \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e9fd431d13e07a2cfe5b9b322988eb18f58311a40cbbcd6c656ba273d38c320\": not found" Aug 13 01:31:15.186249 kubelet[2702]: I0813 01:31:15.186171 2702 scope.go:117] "RemoveContainer" containerID="c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc" Aug 13 01:31:15.186393 containerd[1550]: time="2025-08-13T01:31:15.186297231Z" level=error msg="ContainerStatus for \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\": not found" Aug 13 01:31:15.186530 kubelet[2702]: E0813 01:31:15.186513 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\": not found" containerID="c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc" Aug 13 01:31:15.186530 kubelet[2702]: I0813 01:31:15.186532 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc"} err="failed to get container status \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3bbf864a51520b5b93be9f112b1b6aaa36db2ff80b8cd089a2efaab69f914fc\": not found" Aug 13 01:31:15.186530 kubelet[2702]: I0813 01:31:15.186544 2702 scope.go:117] "RemoveContainer" containerID="2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7" Aug 13 01:31:15.186680 containerd[1550]: time="2025-08-13T01:31:15.186657789Z" level=error msg="ContainerStatus for \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\": not found" Aug 13 01:31:15.186773 kubelet[2702]: E0813 01:31:15.186756 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\": not found" containerID="2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7" Aug 13 01:31:15.186848 kubelet[2702]: I0813 01:31:15.186818 2702 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7"} err="failed to get container status \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2226b4c8b37ddcf03f5861ba6e886d2bd397f835c37d01e8471efe8d89e8adc7\": not found" Aug 13 01:31:15.186848 kubelet[2702]: I0813 01:31:15.186832 2702 scope.go:117] "RemoveContainer" containerID="3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85" Aug 13 01:31:15.186975 containerd[1550]: time="2025-08-13T01:31:15.186953029Z" level=error msg="ContainerStatus for \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\": not found" Aug 13 01:31:15.187084 kubelet[2702]: E0813 01:31:15.187045 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\": not found" containerID="3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85" Aug 13 01:31:15.187128 kubelet[2702]: I0813 01:31:15.187063 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85"} err="failed to get container status \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\": rpc error: code = NotFound desc = an error occurred when try to find container \"3abbfe7bfd3c944fb15baa2e5a6dc843eefe80dbaba32fbe00e34de8168cba85\": not found" Aug 13 01:31:15.187183 kubelet[2702]: I0813 01:31:15.187128 2702 scope.go:117] "RemoveContainer" containerID="05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7" Aug 13 01:31:15.187247 containerd[1550]: 
time="2025-08-13T01:31:15.187216958Z" level=error msg="ContainerStatus for \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\": not found" Aug 13 01:31:15.187338 kubelet[2702]: E0813 01:31:15.187322 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\": not found" containerID="05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7" Aug 13 01:31:15.187398 kubelet[2702]: I0813 01:31:15.187339 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7"} err="failed to get container status \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"05d9acd6cf28bd8356dbd20fbd667323116a96d40007c1ad1d59a03350a018f7\": not found" Aug 13 01:31:15.450539 kubelet[2702]: I0813 01:31:15.450465 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025e24fb-7026-4e6f-b2f2-17d07d390180" path="/var/lib/kubelet/pods/025e24fb-7026-4e6f-b2f2-17d07d390180/volumes" Aug 13 01:31:15.451493 kubelet[2702]: I0813 01:31:15.451181 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a9811f-ff63-41d6-b219-d1ffa4ebedec" path="/var/lib/kubelet/pods/26a9811f-ff63-41d6-b219-d1ffa4ebedec/volumes" Aug 13 01:31:16.121094 sshd[4511]: Connection closed by 147.75.109.163 port 46196 Aug 13 01:31:16.121374 sshd-session[4509]: pam_unix(sshd:session): session closed for user core Aug 13 01:31:16.125750 systemd[1]: sshd@53-172.233.222.9:22-147.75.109.163:46196.service: Deactivated successfully. 
Aug 13 01:31:16.127679 systemd[1]: session-54.scope: Deactivated successfully. Aug 13 01:31:16.128900 systemd-logind[1515]: Session 54 logged out. Waiting for processes to exit. Aug 13 01:31:16.130367 systemd-logind[1515]: Removed session 54. Aug 13 01:31:16.179203 systemd[1]: Started sshd@54-172.233.222.9:22-147.75.109.163:46210.service - OpenSSH per-connection server daemon (147.75.109.163:46210). Aug 13 01:31:16.455233 kubelet[2702]: I0813 01:31:16.454912 2702 setters.go:600] "Node became not ready" node="172-233-222-9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T01:31:16Z","lastTransitionTime":"2025-08-13T01:31:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 01:31:16.519282 sshd[4670]: Accepted publickey for core from 147.75.109.163 port 46210 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:31:16.519832 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:31:16.524531 systemd-logind[1515]: New session 55 of user core. Aug 13 01:31:16.530917 systemd[1]: Started session-55.scope - Session 55 of User core. 
Aug 13 01:31:17.141462 kubelet[2702]: E0813 01:31:17.139849 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="025e24fb-7026-4e6f-b2f2-17d07d390180" containerName="apply-sysctl-overwrites" Aug 13 01:31:17.141462 kubelet[2702]: E0813 01:31:17.139873 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26a9811f-ff63-41d6-b219-d1ffa4ebedec" containerName="cilium-operator" Aug 13 01:31:17.141462 kubelet[2702]: E0813 01:31:17.139879 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="025e24fb-7026-4e6f-b2f2-17d07d390180" containerName="clean-cilium-state" Aug 13 01:31:17.141462 kubelet[2702]: E0813 01:31:17.139885 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="025e24fb-7026-4e6f-b2f2-17d07d390180" containerName="cilium-agent" Aug 13 01:31:17.141462 kubelet[2702]: E0813 01:31:17.139891 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="025e24fb-7026-4e6f-b2f2-17d07d390180" containerName="mount-cgroup" Aug 13 01:31:17.141462 kubelet[2702]: E0813 01:31:17.139897 2702 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="025e24fb-7026-4e6f-b2f2-17d07d390180" containerName="mount-bpf-fs" Aug 13 01:31:17.141462 kubelet[2702]: I0813 01:31:17.139915 2702 memory_manager.go:354] "RemoveStaleState removing state" podUID="025e24fb-7026-4e6f-b2f2-17d07d390180" containerName="cilium-agent" Aug 13 01:31:17.141462 kubelet[2702]: I0813 01:31:17.139921 2702 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a9811f-ff63-41d6-b219-d1ffa4ebedec" containerName="cilium-operator" Aug 13 01:31:17.150835 systemd[1]: Created slice kubepods-burstable-pod4ee22df0_d732_4332_9a31_3f859653097b.slice - libcontainer container kubepods-burstable-pod4ee22df0_d732_4332_9a31_3f859653097b.slice. 
Aug 13 01:31:17.177748 sshd[4672]: Connection closed by 147.75.109.163 port 46210 Aug 13 01:31:17.179023 sshd-session[4670]: pam_unix(sshd:session): session closed for user core Aug 13 01:31:17.182595 systemd-logind[1515]: Session 55 logged out. Waiting for processes to exit. Aug 13 01:31:17.183161 systemd[1]: sshd@54-172.233.222.9:22-147.75.109.163:46210.service: Deactivated successfully. Aug 13 01:31:17.185544 systemd[1]: session-55.scope: Deactivated successfully. Aug 13 01:31:17.187430 systemd-logind[1515]: Removed session 55. Aug 13 01:31:17.237959 kubelet[2702]: I0813 01:31:17.237927 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-cilium-run\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238014 kubelet[2702]: I0813 01:31:17.237962 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ee22df0-d732-4332-9a31-3f859653097b-clustermesh-secrets\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238014 kubelet[2702]: I0813 01:31:17.237981 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ee22df0-d732-4332-9a31-3f859653097b-cilium-config-path\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238014 kubelet[2702]: I0813 01:31:17.237994 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-bpf-maps\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") 
" pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238014 kubelet[2702]: I0813 01:31:17.238007 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-hostproc\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238109 kubelet[2702]: I0813 01:31:17.238019 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-xtables-lock\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238109 kubelet[2702]: I0813 01:31:17.238035 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-host-proc-sys-kernel\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238109 kubelet[2702]: I0813 01:31:17.238047 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ee22df0-d732-4332-9a31-3f859653097b-hubble-tls\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238109 kubelet[2702]: I0813 01:31:17.238060 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-host-proc-sys-net\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238109 kubelet[2702]: I0813 01:31:17.238074 2702 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-lib-modules\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238109 kubelet[2702]: I0813 01:31:17.238088 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-cni-path\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238263 kubelet[2702]: I0813 01:31:17.238101 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-etc-cni-netd\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238263 kubelet[2702]: I0813 01:31:17.238115 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ee22df0-d732-4332-9a31-3f859653097b-cilium-ipsec-secrets\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238263 kubelet[2702]: I0813 01:31:17.238128 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ee22df0-d732-4332-9a31-3f859653097b-cilium-cgroup\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.238263 kubelet[2702]: I0813 01:31:17.238141 2702 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdcdh\" (UniqueName: 
\"kubernetes.io/projected/4ee22df0-d732-4332-9a31-3f859653097b-kube-api-access-gdcdh\") pod \"cilium-bhkdb\" (UID: \"4ee22df0-d732-4332-9a31-3f859653097b\") " pod="kube-system/cilium-bhkdb" Aug 13 01:31:17.239936 systemd[1]: Started sshd@55-172.233.222.9:22-147.75.109.163:46220.service - OpenSSH per-connection server daemon (147.75.109.163:46220). Aug 13 01:31:17.449451 kubelet[2702]: E0813 01:31:17.449147 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:31:17.453354 kubelet[2702]: E0813 01:31:17.453324 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:31:17.453776 containerd[1550]: time="2025-08-13T01:31:17.453741860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bhkdb,Uid:4ee22df0-d732-4332-9a31-3f859653097b,Namespace:kube-system,Attempt:0,}" Aug 13 01:31:17.467895 containerd[1550]: time="2025-08-13T01:31:17.467859363Z" level=info msg="connecting to shim 17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40" address="unix:///run/containerd/s/4646be9579aa073c7ea694f938a24878f064da29ead0f88c1629e1c0a885f778" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:31:17.490911 systemd[1]: Started cri-containerd-17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40.scope - libcontainer container 17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40. 
Aug 13 01:31:17.519057 containerd[1550]: time="2025-08-13T01:31:17.518920301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bhkdb,Uid:4ee22df0-d732-4332-9a31-3f859653097b,Namespace:kube-system,Attempt:0,} returns sandbox id \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\"" Aug 13 01:31:17.520585 kubelet[2702]: E0813 01:31:17.520516 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:31:17.523838 containerd[1550]: time="2025-08-13T01:31:17.523053557Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:31:17.528269 containerd[1550]: time="2025-08-13T01:31:17.528240339Z" level=info msg="Container 7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:31:17.532381 containerd[1550]: time="2025-08-13T01:31:17.532347365Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c\"" Aug 13 01:31:17.532926 containerd[1550]: time="2025-08-13T01:31:17.532880534Z" level=info msg="StartContainer for \"7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c\"" Aug 13 01:31:17.533920 containerd[1550]: time="2025-08-13T01:31:17.533878170Z" level=info msg="connecting to shim 7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c" address="unix:///run/containerd/s/4646be9579aa073c7ea694f938a24878f064da29ead0f88c1629e1c0a885f778" protocol=ttrpc version=3 Aug 13 01:31:17.551931 systemd[1]: Started cri-containerd-7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c.scope - 
libcontainer container 7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c. Aug 13 01:31:17.580043 sshd[4683]: Accepted publickey for core from 147.75.109.163 port 46220 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:31:17.581872 sshd-session[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:31:17.589205 containerd[1550]: time="2025-08-13T01:31:17.589173344Z" level=info msg="StartContainer for \"7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c\" returns successfully" Aug 13 01:31:17.590313 systemd-logind[1515]: New session 56 of user core. Aug 13 01:31:17.596964 systemd[1]: Started session-56.scope - Session 56 of User core. Aug 13 01:31:17.597632 systemd[1]: cri-containerd-7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c.scope: Deactivated successfully. Aug 13 01:31:17.598843 containerd[1550]: time="2025-08-13T01:31:17.598737722Z" level=info msg="received exit event container_id:\"7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c\" id:\"7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c\" pid:4746 exited_at:{seconds:1755048677 nanos:598500612}" Aug 13 01:31:17.599604 containerd[1550]: time="2025-08-13T01:31:17.599586169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c\" id:\"7b75e5e3df0c2f6eece5623e8094e3988bf84337559d98f052a5cf0d95b1c02c\" pid:4746 exited_at:{seconds:1755048677 nanos:598500612}" Aug 13 01:31:17.823527 sshd[4765]: Connection closed by 147.75.109.163 port 46220 Aug 13 01:31:17.824238 sshd-session[4683]: pam_unix(sshd:session): session closed for user core Aug 13 01:31:17.828698 systemd-logind[1515]: Session 56 logged out. Waiting for processes to exit. Aug 13 01:31:17.829331 systemd[1]: sshd@55-172.233.222.9:22-147.75.109.163:46220.service: Deactivated successfully. 
Aug 13 01:31:17.832129 systemd[1]: session-56.scope: Deactivated successfully. Aug 13 01:31:17.833861 systemd-logind[1515]: Removed session 56. Aug 13 01:31:17.886464 systemd[1]: Started sshd@56-172.233.222.9:22-147.75.109.163:46234.service - OpenSSH per-connection server daemon (147.75.109.163:46234). Aug 13 01:31:18.142870 kubelet[2702]: E0813 01:31:18.141926 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:31:18.146107 containerd[1550]: time="2025-08-13T01:31:18.144842623Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:31:18.152599 containerd[1550]: time="2025-08-13T01:31:18.152540877Z" level=info msg="Container dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:31:18.157291 containerd[1550]: time="2025-08-13T01:31:18.157236512Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c\"" Aug 13 01:31:18.158193 containerd[1550]: time="2025-08-13T01:31:18.158166508Z" level=info msg="StartContainer for \"dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c\"" Aug 13 01:31:18.158945 containerd[1550]: time="2025-08-13T01:31:18.158895495Z" level=info msg="connecting to shim dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c" address="unix:///run/containerd/s/4646be9579aa073c7ea694f938a24878f064da29ead0f88c1629e1c0a885f778" protocol=ttrpc version=3 Aug 13 01:31:18.180916 systemd[1]: Started 
cri-containerd-dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c.scope - libcontainer container dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c. Aug 13 01:31:18.213646 containerd[1550]: time="2025-08-13T01:31:18.213599502Z" level=info msg="StartContainer for \"dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c\" returns successfully" Aug 13 01:31:18.224655 sshd[4783]: Accepted publickey for core from 147.75.109.163 port 46234 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:31:18.225322 systemd[1]: cri-containerd-dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c.scope: Deactivated successfully. Aug 13 01:31:18.227827 containerd[1550]: time="2025-08-13T01:31:18.227730945Z" level=info msg="received exit event container_id:\"dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c\" id:\"dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c\" pid:4798 exited_at:{seconds:1755048678 nanos:227090167}" Aug 13 01:31:18.228311 containerd[1550]: time="2025-08-13T01:31:18.228227503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c\" id:\"dcda9d1466a9377fc81d6399bef7cc03c3d18c2dc7aaddadd23d8d4b85771e2c\" pid:4798 exited_at:{seconds:1755048678 nanos:227090167}" Aug 13 01:31:18.228976 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:31:18.236968 systemd-logind[1515]: New session 57 of user core. Aug 13 01:31:18.241977 systemd[1]: Started session-57.scope - Session 57 of User core. 
Aug 13 01:31:19.145709 kubelet[2702]: E0813 01:31:19.145663 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 01:31:19.148415 containerd[1550]: time="2025-08-13T01:31:19.148094250Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:31:19.160276 containerd[1550]: time="2025-08-13T01:31:19.160218159Z" level=info msg="Container ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:31:19.166997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108927171.mount: Deactivated successfully. Aug 13 01:31:19.170626 containerd[1550]: time="2025-08-13T01:31:19.170594864Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18\"" Aug 13 01:31:19.171167 containerd[1550]: time="2025-08-13T01:31:19.171138442Z" level=info msg="StartContainer for \"ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18\"" Aug 13 01:31:19.172980 containerd[1550]: time="2025-08-13T01:31:19.172921206Z" level=info msg="connecting to shim ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18" address="unix:///run/containerd/s/4646be9579aa073c7ea694f938a24878f064da29ead0f88c1629e1c0a885f778" protocol=ttrpc version=3 Aug 13 01:31:19.197935 systemd[1]: Started cri-containerd-ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18.scope - libcontainer container ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18. 
Aug 13 01:31:19.238584 systemd[1]: cri-containerd-ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18.scope: Deactivated successfully. Aug 13 01:31:19.240086 containerd[1550]: time="2025-08-13T01:31:19.239971082Z" level=info msg="StartContainer for \"ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18\" returns successfully" Aug 13 01:31:19.241732 containerd[1550]: time="2025-08-13T01:31:19.241697405Z" level=info msg="received exit event container_id:\"ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18\" id:\"ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18\" pid:4851 exited_at:{seconds:1755048679 nanos:241504966}" Aug 13 01:31:19.242579 containerd[1550]: time="2025-08-13T01:31:19.242543313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18\" id:\"ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18\" pid:4851 exited_at:{seconds:1755048679 nanos:241504966}" Aug 13 01:31:19.266576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae514f0c319e8c0933c1b7e470748e2d19f667531a39a67722b1ac4dcb948a18-rootfs.mount: Deactivated successfully. 
Aug 13 01:31:19.457064 containerd[1550]: time="2025-08-13T01:31:19.456931643Z" level=info msg="StopPodSandbox for \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\""
Aug 13 01:31:19.457368 containerd[1550]: time="2025-08-13T01:31:19.457079822Z" level=info msg="TearDown network for sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" successfully"
Aug 13 01:31:19.457368 containerd[1550]: time="2025-08-13T01:31:19.457091912Z" level=info msg="StopPodSandbox for \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" returns successfully"
Aug 13 01:31:19.457835 containerd[1550]: time="2025-08-13T01:31:19.457812240Z" level=info msg="RemovePodSandbox for \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\""
Aug 13 01:31:19.457835 containerd[1550]: time="2025-08-13T01:31:19.457835580Z" level=info msg="Forcibly stopping sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\""
Aug 13 01:31:19.457925 containerd[1550]: time="2025-08-13T01:31:19.457885390Z" level=info msg="TearDown network for sandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" successfully"
Aug 13 01:31:19.459055 containerd[1550]: time="2025-08-13T01:31:19.459021146Z" level=info msg="Ensure that sandbox dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94 in task-service has been cleanup successfully"
Aug 13 01:31:19.461980 containerd[1550]: time="2025-08-13T01:31:19.461909937Z" level=info msg="RemovePodSandbox \"dc6c6e5957010507f078bf6f980c987e9284ec777de110bc898e67ea67148d94\" returns successfully"
Aug 13 01:31:19.462440 containerd[1550]: time="2025-08-13T01:31:19.462410175Z" level=info msg="StopPodSandbox for \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\""
Aug 13 01:31:19.462608 containerd[1550]: time="2025-08-13T01:31:19.462586654Z" level=info msg="TearDown network for sandbox \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" successfully"
Aug 13 01:31:19.462608 containerd[1550]: time="2025-08-13T01:31:19.462602604Z" level=info msg="StopPodSandbox for \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" returns successfully"
Aug 13 01:31:19.465649 containerd[1550]: time="2025-08-13T01:31:19.465599024Z" level=info msg="RemovePodSandbox for \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\""
Aug 13 01:31:19.465649 containerd[1550]: time="2025-08-13T01:31:19.465638623Z" level=info msg="Forcibly stopping sandbox \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\""
Aug 13 01:31:19.465750 containerd[1550]: time="2025-08-13T01:31:19.465698463Z" level=info msg="TearDown network for sandbox \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" successfully"
Aug 13 01:31:19.466976 containerd[1550]: time="2025-08-13T01:31:19.466942489Z" level=info msg="Ensure that sandbox 4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59 in task-service has been cleanup successfully"
Aug 13 01:31:19.468749 containerd[1550]: time="2025-08-13T01:31:19.468720124Z" level=info msg="RemovePodSandbox \"4ab2a630fec69e190ef075275cfed65ff6981ecfd052af7baf03bceeb7051f59\" returns successfully"
Aug 13 01:31:19.566879 kubelet[2702]: E0813 01:31:19.566833 2702 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 01:31:20.151677 kubelet[2702]: E0813 01:31:20.151617 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:20.154744 containerd[1550]: time="2025-08-13T01:31:20.154692071Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 01:31:20.166881 containerd[1550]: time="2025-08-13T01:31:20.164878997Z" level=info msg="Container 82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:31:20.175309 containerd[1550]: time="2025-08-13T01:31:20.172777511Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76\""
Aug 13 01:31:20.179281 containerd[1550]: time="2025-08-13T01:31:20.179172989Z" level=info msg="StartContainer for \"82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76\""
Aug 13 01:31:20.181681 containerd[1550]: time="2025-08-13T01:31:20.181627810Z" level=info msg="connecting to shim 82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76" address="unix:///run/containerd/s/4646be9579aa073c7ea694f938a24878f064da29ead0f88c1629e1c0a885f778" protocol=ttrpc version=3
Aug 13 01:31:20.207931 systemd[1]: Started cri-containerd-82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76.scope - libcontainer container 82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76.
Aug 13 01:31:20.239186 systemd[1]: cri-containerd-82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76.scope: Deactivated successfully.
Aug 13 01:31:20.239894 containerd[1550]: time="2025-08-13T01:31:20.239643276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76\" id:\"82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76\" pid:4895 exited_at:{seconds:1755048680 nanos:239358377}"
Aug 13 01:31:20.240259 containerd[1550]: time="2025-08-13T01:31:20.240203984Z" level=info msg="received exit event container_id:\"82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76\" id:\"82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76\" pid:4895 exited_at:{seconds:1755048680 nanos:239358377}"
Aug 13 01:31:20.248125 containerd[1550]: time="2025-08-13T01:31:20.248014218Z" level=info msg="StartContainer for \"82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76\" returns successfully"
Aug 13 01:31:20.267751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82f712f65265c1ff48e99714bc0c581c667453eb619cde55c9848e3db6beef76-rootfs.mount: Deactivated successfully.
Aug 13 01:31:21.159456 kubelet[2702]: E0813 01:31:21.159414 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:21.163393 containerd[1550]: time="2025-08-13T01:31:21.163234240Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:31:21.177515 containerd[1550]: time="2025-08-13T01:31:21.177380123Z" level=info msg="Container bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:31:21.180072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346416139.mount: Deactivated successfully.
Aug 13 01:31:21.189864 containerd[1550]: time="2025-08-13T01:31:21.189767661Z" level=info msg="CreateContainer within sandbox \"17baa37f0159d5b846821447e9f813d0ea18cfd1ebd687da1ddcafb24f131c40\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\""
Aug 13 01:31:21.191151 containerd[1550]: time="2025-08-13T01:31:21.191035196Z" level=info msg="StartContainer for \"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\""
Aug 13 01:31:21.192291 containerd[1550]: time="2025-08-13T01:31:21.192271272Z" level=info msg="connecting to shim bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742" address="unix:///run/containerd/s/4646be9579aa073c7ea694f938a24878f064da29ead0f88c1629e1c0a885f778" protocol=ttrpc version=3
Aug 13 01:31:21.222954 systemd[1]: Started cri-containerd-bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742.scope - libcontainer container bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742.
Aug 13 01:31:21.258645 containerd[1550]: time="2025-08-13T01:31:21.258567560Z" level=info msg="StartContainer for \"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\" returns successfully"
Aug 13 01:31:21.347775 containerd[1550]: time="2025-08-13T01:31:21.347702152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\" id:\"d25d038c7751960fa9336e30acf22847bad613163647c9bca3f0cf092088e9ea\" pid:4961 exited_at:{seconds:1755048681 nanos:347168564}"
Aug 13 01:31:21.697838 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Aug 13 01:31:22.166417 kubelet[2702]: E0813 01:31:22.165060 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:22.178368 kubelet[2702]: I0813 01:31:22.177933 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bhkdb" podStartSLOduration=5.177917223 podStartE2EDuration="5.177917223s" podCreationTimestamp="2025-08-13 01:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:31:22.176827547 +0000 UTC m=+362.804062828" watchObservedRunningTime="2025-08-13 01:31:22.177917223 +0000 UTC m=+362.805152504"
Aug 13 01:31:22.595571 containerd[1550]: time="2025-08-13T01:31:22.595533426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\" id:\"fb7e08cf7ac1dd009c4c30a9afd415f50a0397929223a23fd2c1ea5ccb936b8a\" pid:5042 exit_status:1 exited_at:{seconds:1755048682 nanos:595148517}"
Aug 13 01:31:22.598709 kubelet[2702]: E0813 01:31:22.598680 2702 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42764->127.0.0.1:34981: write tcp 127.0.0.1:42764->127.0.0.1:34981: write: broken pipe
Aug 13 01:31:23.460809 kubelet[2702]: E0813 01:31:23.460747 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:24.281549 systemd-networkd[1466]: lxc_health: Link UP
Aug 13 01:31:24.285030 systemd-networkd[1466]: lxc_health: Gained carrier
Aug 13 01:31:24.716246 containerd[1550]: time="2025-08-13T01:31:24.716009267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\" id:\"21835a546a1331662798d48806337a4ee8c7758d81f355e337fd09769db86f24\" pid:5465 exited_at:{seconds:1755048684 nanos:715332689}"
Aug 13 01:31:25.456447 kubelet[2702]: E0813 01:31:25.455391 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:25.699069 systemd-networkd[1466]: lxc_health: Gained IPv6LL
Aug 13 01:31:26.173068 kubelet[2702]: E0813 01:31:26.172776 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:26.880137 containerd[1550]: time="2025-08-13T01:31:26.880094003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\" id:\"79102ca1ced8d820a58ceea5117230a191f669f3573d462fd05da79124bac8a1\" pid:5505 exited_at:{seconds:1755048686 nanos:879652855}"
Aug 13 01:31:26.886778 kubelet[2702]: E0813 01:31:26.886631 2702 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35660->127.0.0.1:34981: write tcp 127.0.0.1:35660->127.0.0.1:34981: write: broken pipe
Aug 13 01:31:27.175149 kubelet[2702]: E0813 01:31:27.174644 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:28.979671 containerd[1550]: time="2025-08-13T01:31:28.979610424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\" id:\"81c4149b565bb4b7c221dd353e02ac422e6939d88b64772ad160420d1ff28c28\" pid:5533 exited_at:{seconds:1755048688 nanos:978652068}"
Aug 13 01:31:31.060411 containerd[1550]: time="2025-08-13T01:31:31.060364977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb5c5311f7e7a167e33f891cf174f8a54d72cf003f8b778922ddc53c0bbc742\" id:\"64c37ab258090a62ef8d3f44cbe0b9b1de37cf102b8a681a1d37d7f02aa24d6f\" pid:5557 exited_at:{seconds:1755048691 nanos:60025988}"
Aug 13 01:31:31.126965 sshd[4830]: Connection closed by 147.75.109.163 port 46234
Aug 13 01:31:31.127502 sshd-session[4783]: pam_unix(sshd:session): session closed for user core
Aug 13 01:31:31.131399 systemd[1]: sshd@56-172.233.222.9:22-147.75.109.163:46234.service: Deactivated successfully.
Aug 13 01:31:31.133769 systemd[1]: session-57.scope: Deactivated successfully.
Aug 13 01:31:31.135107 systemd-logind[1515]: Session 57 logged out. Waiting for processes to exit.
Aug 13 01:31:31.136949 systemd-logind[1515]: Removed session 57.
Aug 13 01:31:35.449476 kubelet[2702]: E0813 01:31:35.448824 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 01:31:40.383384 update_engine[1518]: I20250813 01:31:40.383335 1518 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Aug 13 01:31:40.383384 update_engine[1518]: I20250813 01:31:40.383376 1518 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Aug 13 01:31:40.384002 update_engine[1518]: I20250813 01:31:40.383543 1518 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Aug 13 01:31:40.384039 update_engine[1518]: I20250813 01:31:40.384018 1518 omaha_request_params.cc:62] Current group set to beta
Aug 13 01:31:40.384805 update_engine[1518]: I20250813 01:31:40.384117 1518 update_attempter.cc:499] Already updated boot flags. Skipping.
Aug 13 01:31:40.384805 update_engine[1518]: I20250813 01:31:40.384159 1518 update_attempter.cc:643] Scheduling an action processor start.
Aug 13 01:31:40.384805 update_engine[1518]: I20250813 01:31:40.384176 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 01:31:40.384805 update_engine[1518]: I20250813 01:31:40.384217 1518 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Aug 13 01:31:40.384805 update_engine[1518]: I20250813 01:31:40.384272 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 01:31:40.384805 update_engine[1518]: I20250813 01:31:40.384280 1518 omaha_request_action.cc:272] Request:
Aug 13 01:31:40.384805 update_engine[1518]:
Aug 13 01:31:40.384805 update_engine[1518]:
Aug 13 01:31:40.384805 update_engine[1518]:
Aug 13 01:31:40.384805 update_engine[1518]:
Aug 13 01:31:40.384805 update_engine[1518]:
Aug 13 01:31:40.384805 update_engine[1518]:
Aug 13 01:31:40.384805 update_engine[1518]:
Aug 13 01:31:40.384805 update_engine[1518]:
Aug 13 01:31:40.384805 update_engine[1518]: I20250813 01:31:40.384286 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 01:31:40.385316 locksmithd[1572]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Aug 13 01:31:40.385920 update_engine[1518]: I20250813 01:31:40.385893 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 01:31:40.386230 update_engine[1518]: I20250813 01:31:40.386197 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 01:31:40.476216 update_engine[1518]: E20250813 01:31:40.476176 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 01:31:40.476259 update_engine[1518]: I20250813 01:31:40.476238 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 1