Aug 13 00:04:47.919983 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 13 00:04:47.920004 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:04:47.920013 kernel: BIOS-provided physical RAM map:
Aug 13 00:04:47.920019 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 00:04:47.920024 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 00:04:47.920033 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:04:47.920039 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 00:04:47.920045 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 00:04:47.920051 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:04:47.920056 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 00:04:47.920062 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:04:47.920068 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:04:47.920073 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 00:04:47.920079 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:04:47.920088 kernel: NX (Execute Disable) protection: active
Aug 13 00:04:47.920094 kernel: APIC: Static calls initialized
Aug 13 00:04:47.920100 kernel: SMBIOS 2.8 present.
Aug 13 00:04:47.920106 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 00:04:47.920112 kernel: Hypervisor detected: KVM
Aug 13 00:04:47.920121 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:04:47.920127 kernel: kvm-clock: using sched offset of 4637566930 cycles
Aug 13 00:04:47.920133 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:04:47.920139 kernel: tsc: Detected 2000.000 MHz processor
Aug 13 00:04:47.920146 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:04:47.920153 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:04:47.920159 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 00:04:47.920165 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 00:04:47.920171 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:04:47.920180 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 00:04:47.920186 kernel: Using GB pages for direct mapping
Aug 13 00:04:47.920192 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:04:47.920198 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 00:04:47.920204 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:04:47.920210 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:04:47.920216 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:04:47.920222 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 00:04:47.920229 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:04:47.920237 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:04:47.920244 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:04:47.920250 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:04:47.920260 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 00:04:47.920266 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 00:04:47.920273 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 00:04:47.920290 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 00:04:47.920299 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 00:04:47.920306 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 00:04:47.920312 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 00:04:47.920318 kernel: No NUMA configuration found
Aug 13 00:04:47.920325 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 00:04:47.920331 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Aug 13 00:04:47.920338 kernel: Zone ranges:
Aug 13 00:04:47.920344 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:04:47.920353 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 00:04:47.920360 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 00:04:47.920366 kernel: Movable zone start for each node
Aug 13 00:04:47.920372 kernel: Early memory node ranges
Aug 13 00:04:47.920379 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:04:47.920385 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 00:04:47.920391 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 00:04:47.920398 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 00:04:47.920404 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:04:47.920413 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:04:47.920419 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 00:04:47.920426 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:04:47.920432 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:04:47.920439 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:04:47.920445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:04:47.920452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:04:47.920458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:04:47.920465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:04:47.920474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:04:47.920480 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:04:47.920487 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:04:47.920493 kernel: TSC deadline timer available
Aug 13 00:04:47.920499 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 00:04:47.920506 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:04:47.920512 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:04:47.920518 kernel: kvm-guest: setup PV sched yield
Aug 13 00:04:47.920525 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 00:04:47.920533 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:04:47.920540 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:04:47.920546 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:04:47.922572 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 00:04:47.922595 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 00:04:47.922601 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:04:47.922606 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:04:47.922612 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:04:47.922618 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:04:47.922628 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:04:47.922633 kernel: random: crng init done
Aug 13 00:04:47.922639 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:04:47.922644 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:04:47.922651 kernel: Fallback order for Node 0: 0
Aug 13 00:04:47.922657 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Aug 13 00:04:47.922664 kernel: Policy zone: Normal
Aug 13 00:04:47.922670 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:04:47.922679 kernel: software IO TLB: area num 2.
Aug 13 00:04:47.922686 kernel: Memory: 3964160K/4193772K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 229352K reserved, 0K cma-reserved)
Aug 13 00:04:47.922692 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:04:47.922699 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 13 00:04:47.922705 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 00:04:47.922713 kernel: Dynamic Preempt: voluntary
Aug 13 00:04:47.922721 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:04:47.922729 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:04:47.922739 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:04:47.922749 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:04:47.922757 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:04:47.922765 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:04:47.922773 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:04:47.922780 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:04:47.922788 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:04:47.922796 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:04:47.922803 kernel: Console: colour VGA+ 80x25
Aug 13 00:04:47.922811 kernel: printk: console [tty0] enabled
Aug 13 00:04:47.922819 kernel: printk: console [ttyS0] enabled
Aug 13 00:04:47.922829 kernel: ACPI: Core revision 20230628
Aug 13 00:04:47.922837 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:04:47.922845 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:04:47.922861 kernel: x2apic enabled
Aug 13 00:04:47.922870 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:04:47.922877 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 00:04:47.922884 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 00:04:47.922891 kernel: kvm-guest: setup PV IPIs
Aug 13 00:04:47.922898 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:04:47.922905 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 00:04:47.922911 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 00:04:47.922921 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:04:47.922928 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:04:47.922935 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:04:47.922942 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:04:47.922949 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:04:47.922958 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:04:47.922965 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 00:04:47.922972 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:04:47.922979 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:04:47.922986 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 00:04:47.922994 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 00:04:47.923001 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 00:04:47.923008 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 00:04:47.923017 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:04:47.923024 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:04:47.923030 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:04:47.923037 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:04:47.923044 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 00:04:47.923051 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:04:47.923058 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 00:04:47.923064 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 00:04:47.923071 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:04:47.923080 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:04:47.923087 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:04:47.923094 kernel: landlock: Up and running.
Aug 13 00:04:47.923100 kernel: SELinux: Initializing.
Aug 13 00:04:47.923107 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:04:47.923114 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:04:47.923121 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 00:04:47.923128 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:04:47.923135 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:04:47.923144 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:04:47.923151 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:04:47.923158 kernel: ... version: 0
Aug 13 00:04:47.923164 kernel: ... bit width: 48
Aug 13 00:04:47.923171 kernel: ... generic registers: 6
Aug 13 00:04:47.923178 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:04:47.923185 kernel: ... max period: 00007fffffffffff
Aug 13 00:04:47.923192 kernel: ... fixed-purpose events: 0
Aug 13 00:04:47.923198 kernel: ... event mask: 000000000000003f
Aug 13 00:04:47.923207 kernel: signal: max sigframe size: 3376
Aug 13 00:04:47.923214 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:04:47.923221 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:04:47.923227 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:04:47.923234 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:04:47.923241 kernel: .... node #0, CPUs: #1
Aug 13 00:04:47.923247 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:04:47.923254 kernel: smpboot: Max logical packages: 1
Aug 13 00:04:47.923261 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 00:04:47.923270 kernel: devtmpfs: initialized
Aug 13 00:04:47.923276 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:04:47.923283 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:04:47.923290 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:04:47.923297 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:04:47.923303 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:04:47.923310 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:04:47.923317 kernel: audit: type=2000 audit(1755043487.529:1): state=initialized audit_enabled=0 res=1
Aug 13 00:04:47.923324 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:04:47.923332 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:04:47.923339 kernel: cpuidle: using governor menu
Aug 13 00:04:47.923346 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:04:47.923352 kernel: dca service started, version 1.12.1
Aug 13 00:04:47.923359 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 00:04:47.923366 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 00:04:47.923373 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:04:47.923380 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:04:47.923386 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:04:47.923395 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:04:47.923403 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:04:47.923409 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:04:47.923416 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:04:47.923423 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:04:47.923429 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:04:47.923436 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:04:47.923443 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 00:04:47.923450 kernel: ACPI: Interpreter enabled
Aug 13 00:04:47.923458 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:04:47.923465 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:04:47.923472 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:04:47.923479 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:04:47.923486 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:04:47.923493 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:04:47.923692 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:04:47.923820 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:04:47.923941 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:04:47.923951 kernel: PCI host bridge to bus 0000:00
Aug 13 00:04:47.924080 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:04:47.924206 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:04:47.924326 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:04:47.924414 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 00:04:47.924499 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:04:47.924607 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 00:04:47.924695 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:04:47.924806 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 00:04:47.924910 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 00:04:47.925011 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:04:47.925107 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:04:47.925206 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:04:47.925299 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:04:47.925401 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Aug 13 00:04:47.925496 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 00:04:47.925716 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:04:47.925820 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:04:47.925929 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 00:04:47.926031 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 00:04:47.926127 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:04:47.926223 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:04:47.926317 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:04:47.926419 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 00:04:47.926542 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:04:47.926703 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 00:04:47.926826 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Aug 13 00:04:47.927626 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:04:47.927760 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 00:04:47.927876 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 00:04:47.927886 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:04:47.927893 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:04:47.927900 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:04:47.927911 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:04:47.927918 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:04:47.927924 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:04:47.927931 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:04:47.927938 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:04:47.927944 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:04:47.927951 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:04:47.927958 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:04:47.927964 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:04:47.927973 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:04:47.927980 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:04:47.927987 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:04:47.927993 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:04:47.928000 kernel: iommu: Default domain type: Translated
Aug 13 00:04:47.928007 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:04:47.928013 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:04:47.928020 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:04:47.928027 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 00:04:47.928035 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 00:04:47.928151 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:04:47.928263 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:04:47.928374 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:04:47.928384 kernel: vgaarb: loaded
Aug 13 00:04:47.928391 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:04:47.928397 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:04:47.928405 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:04:47.928415 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:04:47.928422 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:04:47.928428 kernel: pnp: PnP ACPI init
Aug 13 00:04:47.928550 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:04:47.931625 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 00:04:47.931635 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:04:47.931643 kernel: NET: Registered PF_INET protocol family
Aug 13 00:04:47.931650 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:04:47.931657 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:04:47.931667 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:04:47.931674 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:04:47.931682 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:04:47.931689 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:04:47.931696 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:04:47.931702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:04:47.931709 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:04:47.931716 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:04:47.931842 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:04:47.931949 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:04:47.932051 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:04:47.932152 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 00:04:47.932254 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:04:47.932354 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 00:04:47.932363 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:04:47.932370 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:04:47.932377 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 00:04:47.932387 kernel: Initialise system trusted keyrings
Aug 13 00:04:47.932394 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:04:47.932401 kernel: Key type asymmetric registered
Aug 13 00:04:47.932408 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:04:47.932415 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 00:04:47.932421 kernel: io scheduler mq-deadline registered
Aug 13 00:04:47.932428 kernel: io scheduler kyber registered
Aug 13 00:04:47.932435 kernel: io scheduler bfq registered
Aug 13 00:04:47.932442 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:04:47.932453 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:04:47.932460 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:04:47.932466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:04:47.932473 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:04:47.932481 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:04:47.932487 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:04:47.932529 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:04:47.933711 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 00:04:47.933732 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:04:47.933849 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 00:04:47.933961 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:04:47 UTC (1755043487)
Aug 13 00:04:47.934073 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:04:47.934083 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 00:04:47.934089 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:04:47.934096 kernel: Segment Routing with IPv6
Aug 13 00:04:47.934103 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:04:47.934110 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:04:47.934121 kernel: Key type dns_resolver registered
Aug 13 00:04:47.934128 kernel: IPI shorthand broadcast: enabled
Aug 13 00:04:47.934135 kernel: sched_clock: Marking stable (678004600, 206157350)->(929027040, -44865090)
Aug 13 00:04:47.934141 kernel: registered taskstats version 1
Aug 13 00:04:47.934148 kernel: Loading compiled-in X.509 certificates
Aug 13 00:04:47.934155 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 13 00:04:47.934162 kernel: Key type .fscrypt registered
Aug 13 00:04:47.934169 kernel: Key type fscrypt-provisioning registered
Aug 13 00:04:47.934179 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:04:47.934185 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:04:47.934192 kernel: ima: No architecture policies found
Aug 13 00:04:47.934199 kernel: clk: Disabling unused clocks
Aug 13 00:04:47.934205 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 13 00:04:47.934212 kernel: Write protecting the kernel read-only data: 38912k
Aug 13 00:04:47.934219 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 13 00:04:47.934226 kernel: Run /init as init process
Aug 13 00:04:47.934233 kernel: with arguments:
Aug 13 00:04:47.934242 kernel: /init
Aug 13 00:04:47.934249 kernel: with environment:
Aug 13 00:04:47.934256 kernel: HOME=/
Aug 13 00:04:47.934262 kernel: TERM=linux
Aug 13 00:04:47.934269 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:04:47.934277 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:04:47.934287 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:04:47.934295 systemd[1]: Detected virtualization kvm.
Aug 13 00:04:47.934305 systemd[1]: Detected architecture x86-64.
Aug 13 00:04:47.934312 systemd[1]: Running in initrd.
Aug 13 00:04:47.934319 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:04:47.934327 systemd[1]: Hostname set to <localhost>.
Aug 13 00:04:47.934334 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:04:47.934359 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:04:47.934372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:04:47.934380 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:04:47.934388 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:04:47.934396 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:04:47.934404 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:04:47.934412 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:04:47.934421 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:04:47.934432 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:04:47.934440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:04:47.934447 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:04:47.934455 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:04:47.934462 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:04:47.934470 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:04:47.934478 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:04:47.934486 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:04:47.934496 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:04:47.934504 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:04:47.934511 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:04:47.934519 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:04:47.934527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:04:47.934534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:04:47.934541 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:04:47.934549 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:04:47.934583 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:04:47.934595 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:04:47.934603 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:04:47.934610 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:04:47.934618 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:04:47.934626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:04:47.934655 systemd-journald[178]: Collecting audit messages is disabled.
Aug 13 00:04:47.934677 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:04:47.934688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:04:47.934699 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:04:47.934707 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:04:47.934715 systemd-journald[178]: Journal started
Aug 13 00:04:47.934734 systemd-journald[178]: Runtime Journal (/run/log/journal/afa602a2f5004bf795bf682c476402c9) is 8M, max 78.3M, 70.3M free.
Aug 13 00:04:47.931424 systemd-modules-load[179]: Inserted module 'overlay'
Aug 13 00:04:47.985975 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:04:47.986007 kernel: Bridge firewalling registered
Aug 13 00:04:47.986021 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:04:47.958209 systemd-modules-load[179]: Inserted module 'br_netfilter'
Aug 13 00:04:47.985611 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:04:47.986629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:04:47.987866 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:04:47.994706 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:04:47.996836 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:04:48.019748 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:04:48.031759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:04:48.033852 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:04:48.035409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:04:48.036804 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:04:48.039964 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:04:48.046693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:04:48.050697 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:04:48.058533 dracut-cmdline[212]: dracut-dracut-053
Aug 13 00:04:48.062784 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:04:48.091479 systemd-resolved[213]: Positive Trust Anchors:
Aug 13 00:04:48.092214 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:04:48.092796 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:04:48.097627 systemd-resolved[213]: Defaulting to hostname 'linux'.
Aug 13 00:04:48.099864 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:04:48.100444 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:04:48.135594 kernel: SCSI subsystem initialized
Aug 13 00:04:48.143591 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:04:48.154589 kernel: iscsi: registered transport (tcp)
Aug 13 00:04:48.174860 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:04:48.174898 kernel: QLogic iSCSI HBA Driver
Aug 13 00:04:48.230522 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:04:48.236720 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:04:48.261802 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:04:48.261844 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:04:48.262687 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:04:48.306584 kernel: raid6: avx2x4 gen() 25794 MB/s
Aug 13 00:04:48.324583 kernel: raid6: avx2x2 gen() 24111 MB/s
Aug 13 00:04:48.342982 kernel: raid6: avx2x1 gen() 15583 MB/s
Aug 13 00:04:48.342998 kernel: raid6: using algorithm avx2x4 gen() 25794 MB/s
Aug 13 00:04:48.362005 kernel: raid6: .... xor() 3032 MB/s, rmw enabled
Aug 13 00:04:48.362032 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:04:48.382593 kernel: xor: automatically using best checksumming function avx
Aug 13 00:04:48.511592 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:04:48.526706 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:04:48.535783 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:04:48.551388 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Aug 13 00:04:48.556460 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:04:48.562700 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:04:48.578370 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Aug 13 00:04:48.612324 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:04:48.616687 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:04:48.680930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:04:48.692781 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:04:48.701417 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:04:48.707520 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:04:48.710032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:04:48.711831 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:04:48.718967 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:04:48.739538 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:04:48.760598 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:04:48.769742 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 00:04:48.769768 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:04:48.772603 kernel: scsi host0: Virtio SCSI HBA
Aug 13 00:04:48.783590 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 00:04:48.943106 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:04:48.943288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:04:48.944072 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:04:48.944861 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:04:48.944991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:04:48.947722 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:04:48.955880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:04:48.961631 kernel: libata version 3.00 loaded.
Aug 13 00:04:48.959583 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:04:49.010602 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 00:04:49.010806 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 00:04:49.014975 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 00:04:49.015143 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 00:04:49.025572 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 00:04:49.025842 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 00:04:49.026030 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 00:04:49.026256 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 00:04:49.026438 kernel: scsi host1: ahci
Aug 13 00:04:49.026631 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 00:04:49.027691 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:04:49.027711 kernel: GPT:9289727 != 9297919
Aug 13 00:04:49.027722 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:04:49.027736 kernel: GPT:9289727 != 9297919
Aug 13 00:04:49.027744 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:04:49.027753 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:04:49.027762 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 00:04:49.031751 kernel: scsi host2: ahci
Aug 13 00:04:49.034585 kernel: scsi host3: ahci
Aug 13 00:04:49.036592 kernel: scsi host4: ahci
Aug 13 00:04:49.037574 kernel: scsi host5: ahci
Aug 13 00:04:49.038617 kernel: scsi host6: ahci
Aug 13 00:04:49.038788 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Aug 13 00:04:49.038800 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Aug 13 00:04:49.038810 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Aug 13 00:04:49.038818 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Aug 13 00:04:49.038827 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Aug 13 00:04:49.038836 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Aug 13 00:04:49.071739 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (448)
Aug 13 00:04:49.073716 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/sda3 scanned by (udev-worker) (439)
Aug 13 00:04:49.099347 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 00:04:49.156904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:04:49.170964 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 00:04:49.171722 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 00:04:49.181490 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 00:04:49.190094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 00:04:49.216755 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:04:49.219718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:04:49.223267 disk-uuid[558]: Primary Header is updated.
Aug 13 00:04:49.223267 disk-uuid[558]: Secondary Entries is updated.
Aug 13 00:04:49.223267 disk-uuid[558]: Secondary Header is updated.
Aug 13 00:04:49.230622 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:04:49.235723 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:04:49.351596 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 00:04:49.351667 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 00:04:49.351678 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 00:04:49.353578 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 00:04:49.355380 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 00:04:49.356652 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 00:04:50.243824 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:04:50.244982 disk-uuid[560]: The operation has completed successfully.
Aug 13 00:04:50.303730 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:04:50.303866 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:04:50.336698 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:04:50.340344 sh[582]: Success
Aug 13 00:04:50.354925 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 00:04:50.405496 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:04:50.413927 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:04:50.415538 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:04:50.439729 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 13 00:04:50.439772 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:04:50.439792 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:04:50.442814 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:04:50.444577 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:04:50.453571 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 00:04:50.455152 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:04:50.456251 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:04:50.460690 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:04:50.462511 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:04:50.488402 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:04:50.488436 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:04:50.488447 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:04:50.495424 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 00:04:50.495662 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:04:50.503623 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:04:50.506117 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:04:50.514787 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:04:50.583085 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:04:50.592713 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:04:50.598126 ignition[689]: Ignition 2.20.0
Aug 13 00:04:50.598139 ignition[689]: Stage: fetch-offline
Aug 13 00:04:50.599849 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:04:50.598175 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:04:50.598187 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:04:50.598280 ignition[689]: parsed url from cmdline: ""
Aug 13 00:04:50.598284 ignition[689]: no config URL provided
Aug 13 00:04:50.598290 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:04:50.598299 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:04:50.598304 ignition[689]: failed to fetch config: resource requires networking
Aug 13 00:04:50.598454 ignition[689]: Ignition finished successfully
Aug 13 00:04:50.620267 systemd-networkd[765]: lo: Link UP
Aug 13 00:04:50.620278 systemd-networkd[765]: lo: Gained carrier
Aug 13 00:04:50.621883 systemd-networkd[765]: Enumeration completed
Aug 13 00:04:50.621961 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:04:50.622631 systemd[1]: Reached target network.target - Network.
Aug 13 00:04:50.622978 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:04:50.622982 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:04:50.624679 systemd-networkd[765]: eth0: Link UP
Aug 13 00:04:50.624683 systemd-networkd[765]: eth0: Gained carrier
Aug 13 00:04:50.624689 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:04:50.631694 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:04:50.643017 ignition[770]: Ignition 2.20.0
Aug 13 00:04:50.643030 ignition[770]: Stage: fetch
Aug 13 00:04:50.643174 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:04:50.643185 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:04:50.643258 ignition[770]: parsed url from cmdline: ""
Aug 13 00:04:50.643262 ignition[770]: no config URL provided
Aug 13 00:04:50.643267 ignition[770]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:04:50.643275 ignition[770]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:04:50.643490 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 00:04:50.643708 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 00:04:50.844128 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 00:04:50.844288 ignition[770]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 00:04:51.178608 systemd-networkd[765]: eth0: DHCPv4 address 172.234.216.155/24, gateway 172.234.216.1 acquired from 23.205.167.223
Aug 13 00:04:51.244679 ignition[770]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 00:04:51.351802 ignition[770]: PUT result: OK
Aug 13 00:04:51.351882 ignition[770]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 00:04:51.484340 ignition[770]: GET result: OK
Aug 13 00:04:51.484410 ignition[770]: parsing config with SHA512: 4125c15d55f61f9d5418283dd5af8bcedcbaf1103b77c884f9664f6de102ad9776a873beec6c27cb48e61d60da81b7023c4abcd640b6a80c46e30da77fbd8250
Aug 13 00:04:51.488104 unknown[770]: fetched base config from "system"
Aug 13 00:04:51.488738 ignition[770]: fetch: fetch complete
Aug 13 00:04:51.488114 unknown[770]: fetched base config from "system"
Aug 13 00:04:51.488743 ignition[770]: fetch: fetch passed
Aug 13 00:04:51.488119 unknown[770]: fetched user config from "akamai"
Aug 13 00:04:51.488780 ignition[770]: Ignition finished successfully
Aug 13 00:04:51.492248 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:04:51.497753 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:04:51.513722 ignition[777]: Ignition 2.20.0
Aug 13 00:04:51.513737 ignition[777]: Stage: kargs
Aug 13 00:04:51.513906 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:04:51.513918 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:04:51.514755 ignition[777]: kargs: kargs passed
Aug 13 00:04:51.516809 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:04:51.514791 ignition[777]: Ignition finished successfully
Aug 13 00:04:51.523685 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:04:51.533599 ignition[783]: Ignition 2.20.0
Aug 13 00:04:51.533607 ignition[783]: Stage: disks
Aug 13 00:04:51.533708 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:04:51.536616 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:04:51.533717 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:04:51.541275 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:04:51.534347 ignition[783]: disks: disks passed
Aug 13 00:04:51.559391 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:04:51.534379 ignition[783]: Ignition finished successfully
Aug 13 00:04:51.560701 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:04:51.561826 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:04:51.562910 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:04:51.573641 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:04:51.589072 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:04:51.591962 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:04:51.599633 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:04:51.678584 kernel: EXT4-fs (sda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 13 00:04:51.678684 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:04:51.679516 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:04:51.684612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:04:51.686648 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:04:51.687392 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:04:51.687430 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:04:51.687453 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:04:51.701506 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (800)
Aug 13 00:04:51.701530 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:04:51.702399 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:04:51.706054 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:04:51.706068 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:04:51.711574 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 00:04:51.711597 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:04:51.713674 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:04:51.715328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:04:51.753832 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:04:51.758745 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:04:51.762735 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:04:51.766682 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:04:51.850748 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:04:51.859732 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:04:51.863364 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:04:51.867326 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:04:51.870821 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:04:51.887959 ignition[912]: INFO : Ignition 2.20.0 Aug 13 00:04:51.887959 ignition[912]: INFO : Stage: mount Aug 13 00:04:51.889300 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:04:51.889300 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:04:51.889300 ignition[912]: INFO : mount: mount passed Aug 13 00:04:51.889300 ignition[912]: INFO : Ignition finished successfully Aug 13 00:04:51.892639 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:04:51.897737 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:04:51.898960 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:04:52.491767 systemd-networkd[765]: eth0: Gained IPv6LL Aug 13 00:04:52.685714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:04:52.697900 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (926) Aug 13 00:04:52.697995 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:04:52.701656 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:04:52.701674 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:04:52.706927 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:04:52.706948 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:04:52.710085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:04:52.734893 ignition[942]: INFO : Ignition 2.20.0 Aug 13 00:04:52.735653 ignition[942]: INFO : Stage: files Aug 13 00:04:52.736340 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:04:52.737238 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:04:52.738841 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:04:52.740194 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:04:52.740194 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:04:52.742940 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:04:52.743766 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:04:52.743766 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:04:52.743380 unknown[942]: wrote ssh authorized keys file for user: core Aug 13 00:04:52.746237 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:04:52.746237 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 00:04:52.862458 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:04:53.930904 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:04:53.930904 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:04:53.933343 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:04:54.081027 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:04:54.223298 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:04:54.223298 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:04:54.225291 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 00:04:54.618863 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:04:54.928276 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:04:54.928276 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:04:54.930821 ignition[942]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:04:54.930821 ignition[942]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:04:54.930821 ignition[942]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:04:54.930821 ignition[942]: INFO : files: files passed Aug 13 00:04:54.943020 ignition[942]: INFO : Ignition finished successfully Aug 13 00:04:54.932851 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:04:54.939882 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:04:54.942700 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:04:54.946067 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:04:54.946169 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:04:54.955548 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:04:54.958027 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:04:54.959071 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:04:54.960359 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:04:54.961243 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:04:54.967669 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:04:54.993870 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:04:54.993995 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:04:54.995224 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:04:54.996225 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:04:54.997710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:04:55.006684 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:04:55.017610 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:04:55.022704 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:04:55.032971 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 00:04:55.034679 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:04:55.035333 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:04:55.036452 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:04:55.036552 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:04:55.037919 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:04:55.038735 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:04:55.040069 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:04:55.041181 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:04:55.042184 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:04:55.043541 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:04:55.044884 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:04:55.046134 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:04:55.047352 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:04:55.048498 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:04:55.049550 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:04:55.049664 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:04:55.050886 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:04:55.051621 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:04:55.052633 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:04:55.054590 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:04:55.055709 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:04:55.055803 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:04:55.057279 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:04:55.057384 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:04:55.058132 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:04:55.058228 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:04:55.064686 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:04:55.065758 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:04:55.065876 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:04:55.069147 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:04:55.070648 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:04:55.070760 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:04:55.071362 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:04:55.071465 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:04:55.083347 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Aug 13 00:04:55.085587 ignition[995]: INFO : Ignition 2.20.0 Aug 13 00:04:55.085587 ignition[995]: INFO : Stage: umount Aug 13 00:04:55.085587 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:04:55.085587 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:04:55.089641 ignition[995]: INFO : umount: umount passed Aug 13 00:04:55.089641 ignition[995]: INFO : Ignition finished successfully Aug 13 00:04:55.085795 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:04:55.089073 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:04:55.089172 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:04:55.092033 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:04:55.092110 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:04:55.093682 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:04:55.093734 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:04:55.094243 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:04:55.094286 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:04:55.094898 systemd[1]: Stopped target network.target - Network. Aug 13 00:04:55.095386 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:04:55.095437 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:04:55.096006 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:04:55.118750 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:04:55.119342 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:04:55.121886 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:04:55.123447 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:04:55.124532 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:04:55.124608 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:04:55.125820 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:04:55.125859 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:04:55.127015 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:04:55.127074 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:04:55.128063 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:04:55.128111 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:04:55.129358 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:04:55.130786 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:04:55.135460 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:04:55.136449 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:04:55.136605 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:04:55.140437 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:04:55.140726 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:04:55.140848 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:04:55.143104 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Aug 13 00:04:55.143353 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:04:55.143467 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:04:55.146258 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:04:55.146308 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:04:55.147420 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:04:55.147472 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:04:55.152687 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:04:55.153238 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:04:55.153293 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:04:55.155868 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:04:55.155918 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:04:55.157392 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:04:55.157441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:04:55.158188 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:04:55.158235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:04:55.159690 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:04:55.162580 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:04:55.162644 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:04:55.172448 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:04:55.172604 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:04:55.176317 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:04:55.176511 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:04:55.177963 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:04:55.178010 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:04:55.179113 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:04:55.179152 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:04:55.180647 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:04:55.180700 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:04:55.182242 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:04:55.182291 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:04:55.183412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:04:55.183458 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:04:55.198719 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:04:55.199251 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:04:55.199307 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:04:55.201902 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:04:55.201953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 00:04:55.204263 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:04:55.204322 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:04:55.205370 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:04:55.205478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:04:55.208111 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:04:55.213696 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:04:55.220276 systemd[1]: Switching root. Aug 13 00:04:55.252389 systemd-journald[178]: Journal stopped Aug 13 00:04:56.337320 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Aug 13 00:04:56.337356 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:04:56.337368 kernel: SELinux: policy capability open_perms=1 Aug 13 00:04:56.337378 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:04:56.337387 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:04:56.337400 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:04:56.337410 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:04:56.337419 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:04:56.337428 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:04:56.337437 kernel: audit: type=1403 audit(1755043495.380:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:04:56.337449 systemd[1]: Successfully loaded SELinux policy in 46.660ms. Aug 13 00:04:56.337462 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.547ms. Aug 13 00:04:56.337473 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:04:56.337483 systemd[1]: Detected virtualization kvm. Aug 13 00:04:56.337494 systemd[1]: Detected architecture x86-64. Aug 13 00:04:56.337504 systemd[1]: Detected first boot. Aug 13 00:04:56.337516 systemd[1]: Initializing machine ID from random generator. Aug 13 00:04:56.337527 zram_generator::config[1041]: No configuration found. Aug 13 00:04:56.337537 kernel: Guest personality initialized and is inactive Aug 13 00:04:56.337547 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 00:04:56.339685 kernel: Initialized host personality Aug 13 00:04:56.339707 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:04:56.339719 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:04:56.339737 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:04:56.339748 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:04:56.339758 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:04:56.339768 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:04:56.339778 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:04:56.339788 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:04:56.339798 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Aug 13 00:04:56.339811 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:04:56.339821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:04:56.339831 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:04:56.339842 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:04:56.339851 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:04:56.339861 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:04:56.339871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:04:56.339881 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:04:56.339891 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:04:56.339904 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:04:56.339917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:04:56.339928 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:04:56.339938 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:04:56.339948 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:04:56.339958 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:04:56.339969 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:04:56.339982 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:04:56.339992 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:04:56.340002 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:04:56.340012 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:04:56.340023 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:04:56.340033 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:04:56.340043 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:04:56.340053 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:04:56.340064 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:04:56.340076 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:04:56.340087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:04:56.340097 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:04:56.340107 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:04:56.340120 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:04:56.340131 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:04:56.340141 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:04:56.340152 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:04:56.340163 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Aug 13 00:04:56.340173 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:04:56.340184 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:04:56.340194 systemd[1]: Reached target machines.target - Containers. Aug 13 00:04:56.340208 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:04:56.340219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:04:56.340229 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:04:56.340239 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:04:56.340249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:04:56.340259 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:04:56.340270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:04:56.340280 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:04:56.340290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:04:56.340317 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:04:56.340328 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:04:56.340338 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:04:56.340348 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:04:56.340358 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:04:56.340368 kernel: fuse: init (API version 7.39) Aug 13 00:04:56.340379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:04:56.340392 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:04:56.340402 kernel: loop: module loaded Aug 13 00:04:56.340412 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:04:56.340423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:04:56.340433 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:04:56.340444 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:04:56.340454 kernel: ACPI: bus type drm_connector registered Aug 13 00:04:56.340468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:04:56.340478 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:04:56.340491 systemd[1]: Stopped verity-setup.service. Aug 13 00:04:56.340501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:04:56.340512 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:04:56.340522 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:04:56.340532 systemd[1]: Mounted media.mount - External Media Directory. 
Aug 13 00:04:56.340542 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:04:56.340724 systemd-journald[1127]: Collecting audit messages is disabled. Aug 13 00:04:56.340756 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:04:56.340768 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:04:56.340778 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:04:56.340788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:04:56.340799 systemd-journald[1127]: Journal started Aug 13 00:04:56.340822 systemd-journald[1127]: Runtime Journal (/run/log/journal/5659b3285d8344d99224c285ece40836) is 8M, max 78.3M, 70.3M free. Aug 13 00:04:55.975159 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:04:55.986340 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 00:04:55.987004 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:04:56.349608 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:04:56.347447 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:04:56.347782 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:04:56.348868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:04:56.349071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:04:56.351800 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:04:56.352012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:04:56.353021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:04:56.353223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:04:56.355174 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:04:56.355389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:04:56.357173 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:04:56.357400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:04:56.359263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:04:56.361089 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:04:56.362274 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:04:56.363239 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:04:56.380321 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:04:56.388207 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:04:56.395855 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:04:56.396496 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:04:56.396601 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:04:56.398072 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:04:56.401660 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:04:56.405698 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
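systemd-journald has just started its runtime journal under /run/log/journal (8M used, 78.3M max, per the line above); entries live there until systemd-journal-flush.service moves them to persistent storage. For reference, the journal these very lines land in can be read programmatically; a minimal sketch using the go-systemd sdjournal bindings (cgo, links against libsystemd — an illustration of reading the journal, not anything this boot runs):

package main

import (
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/sdjournal"
)

func main() {
	// Opens the local journal: the runtime one under /run/log/journal
	// until the flush service relocates it to /var/log/journal.
	j, err := sdjournal.NewJournal()
	if err != nil {
		log.Fatal(err)
	}
	defer j.Close()

	for i := 0; i < 10; i++ {
		n, err := j.Next()
		if err != nil || n == 0 {
			break // no more entries
		}
		entry, err := j.GetEntry()
		if err != nil {
			continue
		}
		fmt.Println(entry.Fields["MESSAGE"])
	}
}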
Aug 13 00:04:56.406523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:04:56.413796 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:04:56.416726 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:04:56.418650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:04:56.421716 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:04:56.422353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:04:56.425698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:04:56.429433 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:04:56.444518 systemd-journald[1127]: Time spent on flushing to /var/log/journal/5659b3285d8344d99224c285ece40836 is 48.780ms for 988 entries. Aug 13 00:04:56.444518 systemd-journald[1127]: System Journal (/var/log/journal/5659b3285d8344d99224c285ece40836) is 8M, max 195.6M, 187.6M free. Aug 13 00:04:56.503733 systemd-journald[1127]: Received client request to flush runtime journal. Aug 13 00:04:56.503780 kernel: loop0: detected capacity change from 0 to 8 Aug 13 00:04:56.435723 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:04:56.441488 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:04:56.453661 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:04:56.454924 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:04:56.482609 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:04:56.483882 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:04:56.485405 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:04:56.495690 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:04:56.504863 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:04:56.525642 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:04:56.531095 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:04:56.548173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:04:56.550248 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:04:56.560092 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:04:56.569586 kernel: loop1: detected capacity change from 0 to 224512 Aug 13 00:04:56.588188 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:04:56.596780 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:04:56.620867 kernel: loop2: detected capacity change from 0 to 138176 Aug 13 00:04:56.640216 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Aug 13 00:04:56.641101 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. 
Aug 13 00:04:56.650112 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:04:56.681622 kernel: loop3: detected capacity change from 0 to 147912 Aug 13 00:04:56.733590 kernel: loop4: detected capacity change from 0 to 8 Aug 13 00:04:56.739114 kernel: loop5: detected capacity change from 0 to 224512 Aug 13 00:04:56.761784 kernel: loop6: detected capacity change from 0 to 138176 Aug 13 00:04:56.789588 kernel: loop7: detected capacity change from 0 to 147912 Aug 13 00:04:56.805978 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 00:04:56.806624 (sd-merge)[1191]: Merged extensions into '/usr'. Aug 13 00:04:56.812117 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:04:56.812133 systemd[1]: Reloading... Aug 13 00:04:56.948806 zram_generator::config[1222]: No configuration found. Aug 13 00:04:56.996369 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:04:57.064975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:04:57.124405 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:04:57.124830 systemd[1]: Reloading finished in 312 ms. Aug 13 00:04:57.141361 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:04:57.142669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:04:57.143620 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:04:57.156111 systemd[1]: Starting ensure-sysext.service... Aug 13 00:04:57.159709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:04:57.163718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:04:57.177683 systemd[1]: Reload requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:04:57.177699 systemd[1]: Reloading... Aug 13 00:04:57.184584 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:04:57.184842 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:04:57.187046 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:04:57.187388 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Aug 13 00:04:57.187753 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Aug 13 00:04:57.195164 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:04:57.195302 systemd-tmpfiles[1264]: Skipping /boot Aug 13 00:04:57.212236 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:04:57.212308 systemd-tmpfiles[1264]: Skipping /boot Aug 13 00:04:57.228948 systemd-udevd[1265]: Using default interface naming scheme 'v255'. Aug 13 00:04:57.276778 zram_generator::config[1289]: No configuration found. 
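The (sd-merge) lines above are systemd-sysext combining the extension images — containerd-flatcar, docker-flatcar, kubernetes, oem-akamai — into /usr, which it does by stacking them as a read-only overlayfs over the base /usr, then reloading units. A loose Go sketch of that final overlay mount; the lowerdir staging paths here are illustrative assumptions, not the directories systemd actually uses:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Each extension's /usr tree becomes a lower layer, with the original
	// /usr as the bottom layer; the merged result is mounted read-only.
	opts := "lowerdir=/run/extensions/kubernetes/usr:/run/extensions/docker-flatcar/usr:/usr"
	if err := unix.Mount("overlay", "/usr", "overlay", unix.MS_RDONLY, opts); err != nil {
		log.Fatalf("overlay mount: %v", err)
	}
	log.Println("extensions merged into /usr")
}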
Aug 13 00:04:57.438942 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:04:57.459415 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:04:57.462305 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:04:57.467748 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:04:57.497870 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:04:57.498183 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 00:04:57.498379 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:04:57.500747 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:04:57.537598 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1310) Aug 13 00:04:57.579604 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:04:57.587192 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:04:57.587457 systemd[1]: Reloading finished in 409 ms. Aug 13 00:04:57.597944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:04:57.599367 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:04:57.631413 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:04:57.636027 systemd[1]: Finished ensure-sysext.service. Aug 13 00:04:57.656868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:04:57.660366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:04:57.665740 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:04:57.669711 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:04:57.670531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:04:57.672745 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:04:57.676808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:04:57.679764 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:04:57.683529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:04:57.687092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:04:57.688818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:04:57.696035 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:04:57.702139 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:04:57.704348 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:04:57.714638 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Aug 13 00:04:57.720993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:04:57.726877 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:04:57.738737 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:04:57.744239 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:04:57.746894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:04:57.748053 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:04:57.750894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:04:57.751811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:04:57.753284 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:04:57.754998 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:04:57.756120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:04:57.756815 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:04:57.758114 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:04:57.759548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:04:57.760934 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:04:57.773014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:04:57.773092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:04:57.782040 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:04:57.787663 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:04:57.790441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:04:57.803139 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:04:57.808021 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:04:57.811410 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:04:57.812214 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:04:57.814459 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:04:57.823072 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:04:57.833854 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:04:57.857341 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:04:57.864812 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:04:57.868852 augenrules[1420]: No rules Aug 13 00:04:57.869242 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:04:57.869736 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Aug 13 00:04:57.875732 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:04:57.971827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:04:58.008133 systemd-resolved[1387]: Positive Trust Anchors: Aug 13 00:04:58.008626 systemd-resolved[1387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:04:58.008695 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:04:58.010584 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:04:58.011348 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:04:58.013718 systemd-resolved[1387]: Defaulting to hostname 'linux'. Aug 13 00:04:58.016245 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:04:58.016992 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:04:58.017635 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:04:58.018339 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:04:58.019036 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:04:58.019892 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:04:58.020696 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:04:58.021316 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:04:58.021431 systemd-networkd[1386]: lo: Link UP Aug 13 00:04:58.021444 systemd-networkd[1386]: lo: Gained carrier Aug 13 00:04:58.022216 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:04:58.022244 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:04:58.022758 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:04:58.023636 systemd-networkd[1386]: Enumeration completed Aug 13 00:04:58.024335 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:04:58.024346 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:04:58.024355 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:04:58.025888 systemd-networkd[1386]: eth0: Link UP Aug 13 00:04:58.025892 systemd-networkd[1386]: eth0: Gained carrier Aug 13 00:04:58.025904 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:04:58.026848 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:04:58.029814 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
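The networkd enumeration above matches eth0 against /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all policy that DHCPs any otherwise-unconfigured interface (hence the warning about potentially unpredictable interface names). Its general shape is roughly the following — a sketch of such a unit, not the verbatim contents of the Flatcar file:

[Match]
Name=*

[Network]
DHCP=yes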
Aug 13 00:04:58.030621 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:04:58.031186 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:04:58.034461 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:04:58.035603 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:04:58.037007 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:04:58.037914 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:04:58.038690 systemd[1]: Reached target network.target - Network. Aug 13 00:04:58.039180 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:04:58.039841 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:04:58.040696 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:04:58.040736 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:04:58.050629 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:04:58.052882 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:04:58.056532 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:04:58.059701 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:04:58.076749 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:04:58.078073 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:04:58.080183 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:04:58.086882 jq[1445]: false Aug 13 00:04:58.090637 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:04:58.094717 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:04:58.103853 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:04:58.109103 dbus-daemon[1444]: [system] SELinux support is enabled Aug 13 00:04:58.111673 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:04:58.125351 extend-filesystems[1446]: Found loop4 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found loop5 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found loop6 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found loop7 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found sda Aug 13 00:04:58.125351 extend-filesystems[1446]: Found sda1 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found sda2 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found sda3 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found usr Aug 13 00:04:58.125351 extend-filesystems[1446]: Found sda4 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found sda6 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found sda7 Aug 13 00:04:58.125351 extend-filesystems[1446]: Found sda9 Aug 13 00:04:58.125351 extend-filesystems[1446]: Checking size of /dev/sda9 Aug 13 00:04:58.202970 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 00:04:58.202995 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 00:04:58.124886 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Aug 13 00:04:58.203093 extend-filesystems[1446]: Resized partition /dev/sda9 Aug 13 00:04:58.128686 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:04:58.204731 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:04:58.204731 extend-filesystems[1468]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:04:58.204731 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:04:58.204731 extend-filesystems[1468]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 00:04:58.129972 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:04:58.215793 extend-filesystems[1446]: Resized filesystem in /dev/sda9 Aug 13 00:04:58.216341 coreos-metadata[1443]: Aug 13 00:04:58.212 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:04:58.222625 update_engine[1465]: I20250813 00:04:58.163908 1465 main.cc:92] Flatcar Update Engine starting Aug 13 00:04:58.222625 update_engine[1465]: I20250813 00:04:58.165239 1465 update_check_scheduler.cc:74] Next update check in 10m31s Aug 13 00:04:58.130418 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:04:58.222970 jq[1470]: true Aug 13 00:04:58.139792 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:04:58.174865 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:04:58.176225 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:04:58.187055 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:04:58.188657 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:04:58.189005 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:04:58.189223 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:04:58.197990 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:04:58.198238 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:04:58.206333 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:04:58.207539 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:04:58.246970 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:04:58.247969 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:04:58.248301 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:04:58.248340 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:04:58.250822 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:04:58.250838 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:04:58.260850 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
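extend-filesystems.service above grows the mounted root filesystem online with resize2fs 1.47.1, from 553472 to 555003 4k blocks ("on-line resizing required"). The core of that operation reduces to one resize2fs invocation against the device from the log; a sketch expressed in Go for consistency with the other examples, though any shell would do:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// With no explicit size argument, resize2fs grows the filesystem
	// to fill its partition, even while mounted (online resize).
	out, err := exec.Command("resize2fs", "/dev/sda9").CombinedOutput()
	if err != nil {
		log.Fatalf("resize2fs: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}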
Aug 13 00:04:58.263201 jq[1478]: true Aug 13 00:04:58.264245 tar[1477]: linux-amd64/LICENSE Aug 13 00:04:58.274640 tar[1477]: linux-amd64/helm Aug 13 00:04:58.304999 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:04:58.340574 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1312) Aug 13 00:04:58.348145 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:04:58.363240 bash[1506]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:04:58.368114 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:04:58.380424 systemd[1]: Starting sshkeys.service... Aug 13 00:04:58.398500 systemd-logind[1455]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:04:58.398536 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:04:58.401711 systemd-logind[1455]: New seat seat0. Aug 13 00:04:58.404088 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:04:58.423925 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:04:58.434450 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:04:58.438184 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:04:58.474394 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:04:58.512366 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:04:58.512680 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:04:58.524845 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:04:58.541331 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:04:58.549910 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:04:58.551685 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:04:58.559944 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:04:58.560811 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:04:58.565640 systemd-networkd[1386]: eth0: DHCPv4 address 172.234.216.155/24, gateway 172.234.216.1 acquired from 23.205.167.223 Aug 13 00:04:58.566172 dbus-daemon[1444]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1386 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:04:58.574364 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Aug 13 00:04:58.576922 coreos-metadata[1516]: Aug 13 00:04:58.574 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:04:58.577649 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:04:58.671027 containerd[1483]: time="2025-08-13T00:04:58.670917200Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Aug 13 00:04:58.698576 coreos-metadata[1516]: Aug 13 00:04:58.697 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 00:04:58.702909 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
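update_engine comes up and schedules its next check in 10m31s, and locksmithd starts idle with strategy "reboot". On Flatcar both daemons are steered by /etc/flatcar/update.conf; a sketch of pinning the strategy seen here, with keys assumed from Flatcar's documentation rather than read from this host:

    # /etc/flatcar/update.conf
    GROUP=stable
    REBOOT_STRATEGY=reboot    # locksmithd reboots as soon as an update is applied
                              # alternatives: etcd-lock, off

    systemctl restart update-engine locksmithd   # apply the changed strategy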
Aug 13 00:04:58.705129 dbus-daemon[1444]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:04:58.705936 dbus-daemon[1444]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1538 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:04:58.711428 containerd[1483]: time="2025-08-13T00:04:58.711397940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:58.713380 containerd[1483]: time="2025-08-13T00:04:58.713355290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:04:58.713438 containerd[1483]: time="2025-08-13T00:04:58.713425390Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:04:58.713486 containerd[1483]: time="2025-08-13T00:04:58.713474760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:04:58.713806 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:04:58.714732 containerd[1483]: time="2025-08-13T00:04:58.714713810Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:04:58.714855 containerd[1483]: time="2025-08-13T00:04:58.714792810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:58.715353 containerd[1483]: time="2025-08-13T00:04:58.715333800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:04:58.715405 containerd[1483]: time="2025-08-13T00:04:58.715393040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:58.715691 containerd[1483]: time="2025-08-13T00:04:58.715671980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:04:58.715744 containerd[1483]: time="2025-08-13T00:04:58.715732160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:58.715789 containerd[1483]: time="2025-08-13T00:04:58.715777190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:04:58.715843 containerd[1483]: time="2025-08-13T00:04:58.715831110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:58.715971 containerd[1483]: time="2025-08-13T00:04:58.715956250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:58.716235 containerd[1483]: time="2025-08-13T00:04:58.716218640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:04:58.716618 containerd[1483]: time="2025-08-13T00:04:58.716599460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:04:58.716666 containerd[1483]: time="2025-08-13T00:04:58.716655140Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:04:58.716795 containerd[1483]: time="2025-08-13T00:04:58.716780780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:04:58.716892 containerd[1483]: time="2025-08-13T00:04:58.716878750Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:04:58.720607 containerd[1483]: time="2025-08-13T00:04:58.720383980Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:04:58.720689 containerd[1483]: time="2025-08-13T00:04:58.720675610Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:04:58.720758 containerd[1483]: time="2025-08-13T00:04:58.720742980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:04:58.720808 containerd[1483]: time="2025-08-13T00:04:58.720796240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:04:58.720850 containerd[1483]: time="2025-08-13T00:04:58.720839850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:04:58.720997 containerd[1483]: time="2025-08-13T00:04:58.720981840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:04:58.721234 containerd[1483]: time="2025-08-13T00:04:58.721219550Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:04:58.721373 containerd[1483]: time="2025-08-13T00:04:58.721357780Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:04:58.721423 containerd[1483]: time="2025-08-13T00:04:58.721411970Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:04:58.721479 containerd[1483]: time="2025-08-13T00:04:58.721466700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:04:58.721523 containerd[1483]: time="2025-08-13T00:04:58.721512560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:04:58.721743 containerd[1483]: time="2025-08-13T00:04:58.721553500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:04:58.721789 containerd[1483]: time="2025-08-13T00:04:58.721778710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:04:58.721830 containerd[1483]: time="2025-08-13T00:04:58.721820250Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Aug 13 00:04:58.721889 containerd[1483]: time="2025-08-13T00:04:58.721876820Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:04:58.721940 containerd[1483]: time="2025-08-13T00:04:58.721929000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:04:58.721981 containerd[1483]: time="2025-08-13T00:04:58.721971060Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:04:58.722019 containerd[1483]: time="2025-08-13T00:04:58.722009140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:04:58.722065 containerd[1483]: time="2025-08-13T00:04:58.722054900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722114 containerd[1483]: time="2025-08-13T00:04:58.722103540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722166 containerd[1483]: time="2025-08-13T00:04:58.722154510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722217 containerd[1483]: time="2025-08-13T00:04:58.722205760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722258 containerd[1483]: time="2025-08-13T00:04:58.722248460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722299 containerd[1483]: time="2025-08-13T00:04:58.722288970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722347 containerd[1483]: time="2025-08-13T00:04:58.722336090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722398 containerd[1483]: time="2025-08-13T00:04:58.722387030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722689 containerd[1483]: time="2025-08-13T00:04:58.722674410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.722751 containerd[1483]: time="2025-08-13T00:04:58.722739350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723814650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723837320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723848710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723860060Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723876290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723902490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723911960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723956200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723968820Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723977260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723986870Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.723994470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.724004490Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:04:58.725249 containerd[1483]: time="2025-08-13T00:04:58.724012850Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:04:58.725490 containerd[1483]: time="2025-08-13T00:04:58.724021500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:04:58.725509 containerd[1483]: time="2025-08-13T00:04:58.724240080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:04:58.725509 containerd[1483]: time="2025-08-13T00:04:58.724299360Z" level=info msg="Connect containerd service" Aug 13 00:04:58.725509 containerd[1483]: time="2025-08-13T00:04:58.724332230Z" level=info msg="using legacy CRI server" Aug 13 00:04:58.725509 containerd[1483]: time="2025-08-13T00:04:58.724338000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:04:58.725509 containerd[1483]: time="2025-08-13T00:04:58.724624250Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:04:58.726150 containerd[1483]: time="2025-08-13T00:04:58.726129200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:04:58.726314 
containerd[1483]: time="2025-08-13T00:04:58.726288610Z" level=info msg="Start subscribing containerd event" Aug 13 00:04:58.726384 containerd[1483]: time="2025-08-13T00:04:58.726370880Z" level=info msg="Start recovering state" Aug 13 00:04:58.726470 containerd[1483]: time="2025-08-13T00:04:58.726457920Z" level=info msg="Start event monitor" Aug 13 00:04:58.726512 containerd[1483]: time="2025-08-13T00:04:58.726502280Z" level=info msg="Start snapshots syncer" Aug 13 00:04:58.726578 containerd[1483]: time="2025-08-13T00:04:58.726546210Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:04:58.726622 containerd[1483]: time="2025-08-13T00:04:58.726611360Z" level=info msg="Start streaming server" Aug 13 00:04:58.726981 containerd[1483]: time="2025-08-13T00:04:58.726965290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:04:58.727137 containerd[1483]: time="2025-08-13T00:04:58.727123000Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:04:58.727297 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:04:58.729654 containerd[1483]: time="2025-08-13T00:04:58.729636850Z" level=info msg="containerd successfully booted in 0.061877s" Aug 13 00:04:58.738517 polkitd[1542]: Started polkitd version 121 Aug 13 00:04:58.743409 polkitd[1542]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:04:58.743467 polkitd[1542]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:04:58.744122 polkitd[1542]: Finished loading, compiling and executing 2 rules Aug 13 00:04:58.744873 dbus-daemon[1444]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:04:58.744977 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 00:04:58.746241 polkitd[1542]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:04:58.756110 systemd-hostnamed[1538]: Hostname set to <172-234-216-155> (transient) Aug 13 00:04:58.756582 systemd-resolved[1387]: System hostname changed to '172-234-216-155'. Aug 13 00:04:58.860747 coreos-metadata[1516]: Aug 13 00:04:58.860 INFO Fetch successful Aug 13 00:04:58.877927 update-ssh-keys[1554]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:04:58.879258 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:04:58.882172 systemd[1]: Finished sshkeys.service. Aug 13 00:04:58.921297 tar[1477]: linux-amd64/README.md Aug 13 00:04:58.932794 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:04:59.224647 coreos-metadata[1443]: Aug 13 00:04:59.224 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 00:04:59.326067 coreos-metadata[1443]: Aug 13 00:04:59.326 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 00:04:59.531797 systemd-networkd[1386]: eth0: Gained IPv6LL Aug 13 00:04:59.532748 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Aug 13 00:04:59.533779 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:04:59.535403 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:04:59.541093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:04:59.543680 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
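The PluginConfig dump containerd prints before "Connect containerd service" (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image pause:3.8, CNI under /opt/cni/bin and /etc/cni/net.d) corresponds to a version-2 config.toml along these lines; this is a reconstruction from the dump, not the literal file shipped on the image:

    # /etc/containerd/config.toml (sketch matching the dumped PluginConfig)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error is the expected consequence of an empty /etc/cni/net.d at this stage; it clears once a CNI plugin drops a conflist there.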
Aug 13 00:04:59.547145 coreos-metadata[1443]: Aug 13 00:04:59.545 INFO Fetch successful Aug 13 00:04:59.547145 coreos-metadata[1443]: Aug 13 00:04:59.545 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 00:04:59.572775 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:04:59.849045 coreos-metadata[1443]: Aug 13 00:04:59.848 INFO Fetch successful Aug 13 00:04:59.918884 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:04:59.920139 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:05:00.428840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:05:00.430056 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:05:00.437910 systemd[1]: Startup finished in 805ms (kernel) + 7.680s (initrd) + 5.102s (userspace) = 13.588s. Aug 13 00:05:00.481424 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:05:01.001760 kubelet[1598]: E0813 00:05:01.001627 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:05:01.006368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:05:01.006600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:05:01.007175 systemd[1]: kubelet.service: Consumed 906ms CPU time, 265.9M memory peak. Aug 13 00:05:01.034685 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Aug 13 00:05:02.412746 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:05:02.417764 systemd[1]: Started sshd@0-172.234.216.155:22-139.178.89.65:37500.service - OpenSSH per-connection server daemon (139.178.89.65:37500). Aug 13 00:05:02.668160 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Aug 13 00:05:02.753438 sshd[1609]: Accepted publickey for core from 139.178.89.65 port 37500 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:05:02.755373 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:02.761603 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:05:02.774329 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:05:02.782521 systemd-logind[1455]: New session 1 of user core. Aug 13 00:05:02.788718 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:05:02.796193 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:05:02.800469 (systemd)[1613]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:02.803682 systemd-logind[1455]: New session c1 of user core. Aug 13 00:05:02.937262 systemd[1613]: Queued start job for default target default.target. Aug 13 00:05:02.946870 systemd[1613]: Created slice app.slice - User Application Slice. Aug 13 00:05:02.946899 systemd[1613]: Reached target paths.target - Paths. Aug 13 00:05:02.946941 systemd[1613]: Reached target timers.target - Timers. 
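The kubelet exit above (and each identical one later in this log) is a single condition: /var/lib/kubelet/config.yaml does not exist yet. In a kubeadm-style bootstrap that file is written by kubeadm init/join, so the crash loop is expected until then. A minimal sketch of the KubeletConfiguration that normally lands there, with values assumed to match the containerd settings logged earlier:

    # /var/lib/kubelet/config.yaml -- normally generated by kubeadm; sketch only
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd        # pairs with SystemdCgroup = true in containerd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests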
Aug 13 00:05:02.948302 systemd[1613]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:05:02.958210 systemd[1613]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:05:02.958273 systemd[1613]: Reached target sockets.target - Sockets. Aug 13 00:05:02.958309 systemd[1613]: Reached target basic.target - Basic System. Aug 13 00:05:02.958352 systemd[1613]: Reached target default.target - Main User Target. Aug 13 00:05:02.958382 systemd[1613]: Startup finished in 147ms. Aug 13 00:05:02.958780 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:05:02.969697 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:05:03.229760 systemd[1]: Started sshd@1-172.234.216.155:22-139.178.89.65:37514.service - OpenSSH per-connection server daemon (139.178.89.65:37514). Aug 13 00:05:03.561969 sshd[1624]: Accepted publickey for core from 139.178.89.65 port 37514 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:05:03.563408 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:03.568621 systemd-logind[1455]: New session 2 of user core. Aug 13 00:05:03.577680 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:05:03.808408 sshd[1626]: Connection closed by 139.178.89.65 port 37514 Aug 13 00:05:03.808902 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:03.813253 systemd[1]: sshd@1-172.234.216.155:22-139.178.89.65:37514.service: Deactivated successfully. Aug 13 00:05:03.815545 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:05:03.816537 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:05:03.817322 systemd-logind[1455]: Removed session 2. Aug 13 00:05:03.872827 systemd[1]: Started sshd@2-172.234.216.155:22-139.178.89.65:37522.service - OpenSSH per-connection server daemon (139.178.89.65:37522). Aug 13 00:05:04.215650 sshd[1632]: Accepted publickey for core from 139.178.89.65 port 37522 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:05:04.217399 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:04.222876 systemd-logind[1455]: New session 3 of user core. Aug 13 00:05:04.229690 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:05:04.466702 sshd[1634]: Connection closed by 139.178.89.65 port 37522 Aug 13 00:05:04.467423 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:04.471415 systemd[1]: sshd@2-172.234.216.155:22-139.178.89.65:37522.service: Deactivated successfully. Aug 13 00:05:04.473961 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:05:04.474663 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:05:04.475841 systemd-logind[1455]: Removed session 3. Aug 13 00:05:04.529967 systemd[1]: Started sshd@3-172.234.216.155:22-139.178.89.65:37532.service - OpenSSH per-connection server daemon (139.178.89.65:37532). Aug 13 00:05:04.857411 sshd[1640]: Accepted publickey for core from 139.178.89.65 port 37532 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:05:04.858766 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:04.863781 systemd-logind[1455]: New session 4 of user core. Aug 13 00:05:04.869690 systemd[1]: Started session-4.scope - Session 4 of User core. 
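The pam_unix/user@500 block is the normal logind flow: sshd opens a PAM session for core (uid 500), logind registers session 1, and user@500.service starts the per-user manager, which reaches default.target in 147ms. The same state seen from a shell (session and unit names taken from the log):

    loginctl list-sessions              # session 1, user core, from 139.178.89.65
    loginctl show-session 1 -p State    # -> State=active
    systemctl status user@500.service   # the per-user service manager started above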
Aug 13 00:05:05.103129 sshd[1642]: Connection closed by 139.178.89.65 port 37532 Aug 13 00:05:05.103791 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:05.108057 systemd[1]: sshd@3-172.234.216.155:22-139.178.89.65:37532.service: Deactivated successfully. Aug 13 00:05:05.110432 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:05:05.112321 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:05:05.113532 systemd-logind[1455]: Removed session 4. Aug 13 00:05:05.170785 systemd[1]: Started sshd@4-172.234.216.155:22-139.178.89.65:37538.service - OpenSSH per-connection server daemon (139.178.89.65:37538). Aug 13 00:05:05.507706 sshd[1648]: Accepted publickey for core from 139.178.89.65 port 37538 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:05:05.509356 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:05.516323 systemd-logind[1455]: New session 5 of user core. Aug 13 00:05:05.522872 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:05:05.721019 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:05:05.721403 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:05:05.744505 sudo[1651]: pam_unix(sudo:session): session closed for user root Aug 13 00:05:05.798183 sshd[1650]: Connection closed by 139.178.89.65 port 37538 Aug 13 00:05:05.799945 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:05.804899 systemd[1]: sshd@4-172.234.216.155:22-139.178.89.65:37538.service: Deactivated successfully. Aug 13 00:05:05.807466 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:05:05.809613 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:05:05.811370 systemd-logind[1455]: Removed session 5. Aug 13 00:05:05.857686 systemd[1]: Started sshd@5-172.234.216.155:22-139.178.89.65:37540.service - OpenSSH per-connection server daemon (139.178.89.65:37540). Aug 13 00:05:06.215326 sshd[1657]: Accepted publickey for core from 139.178.89.65 port 37540 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:05:06.217042 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:06.223400 systemd-logind[1455]: New session 6 of user core. Aug 13 00:05:06.233929 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:05:06.414490 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:05:06.415142 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:05:06.419471 sudo[1661]: pam_unix(sudo:session): session closed for user root Aug 13 00:05:06.427109 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:05:06.427418 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:05:06.443314 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:05:06.475724 augenrules[1683]: No rules Aug 13 00:05:06.475914 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:05:06.476170 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
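The sudo records trace what these sessions did by hand: switch SELinux to enforcing, delete the shipped audit rules, and restart audit-rules.service, whose reload (typically augenrules --load) then finds nothing, hence "augenrules[1683]: No rules". The same steps as a sketch:

    setenforce 1                    # SELinux to enforcing (the session-5 sudo above)
    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    systemctl restart audit-rules   # reloads rules from /etc/audit/rules.d
    auditctl -l                     # -> No rules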
Aug 13 00:05:06.478014 sudo[1660]: pam_unix(sudo:session): session closed for user root Aug 13 00:05:06.528415 sshd[1659]: Connection closed by 139.178.89.65 port 37540 Aug 13 00:05:06.529090 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:06.532996 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:05:06.533817 systemd[1]: sshd@5-172.234.216.155:22-139.178.89.65:37540.service: Deactivated successfully. Aug 13 00:05:06.536399 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:05:06.537227 systemd-logind[1455]: Removed session 6. Aug 13 00:05:06.592903 systemd[1]: Started sshd@6-172.234.216.155:22-139.178.89.65:37554.service - OpenSSH per-connection server daemon (139.178.89.65:37554). Aug 13 00:05:06.921316 sshd[1692]: Accepted publickey for core from 139.178.89.65 port 37554 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:05:06.923199 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:06.928233 systemd-logind[1455]: New session 7 of user core. Aug 13 00:05:06.936679 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:05:07.118075 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:05:07.118415 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:05:07.397979 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:05:07.398541 (dockerd)[1712]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:05:07.661005 dockerd[1712]: time="2025-08-13T00:05:07.660850960Z" level=info msg="Starting up" Aug 13 00:05:07.736662 systemd[1]: var-lib-docker-metacopy\x2dcheck3253241975-merged.mount: Deactivated successfully. Aug 13 00:05:07.756121 dockerd[1712]: time="2025-08-13T00:05:07.756065190Z" level=info msg="Loading containers: start." Aug 13 00:05:07.917730 kernel: Initializing XFRM netlink socket Aug 13 00:05:07.942301 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Aug 13 00:05:07.943168 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Aug 13 00:05:07.953385 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Aug 13 00:05:08.000246 systemd-networkd[1386]: docker0: Link UP Aug 13 00:05:08.000463 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Aug 13 00:05:08.030948 dockerd[1712]: time="2025-08-13T00:05:08.030912550Z" level=info msg="Loading containers: done." 
Aug 13 00:05:08.045794 dockerd[1712]: time="2025-08-13T00:05:08.045753290Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:05:08.045936 dockerd[1712]: time="2025-08-13T00:05:08.045815700Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 13 00:05:08.045936 dockerd[1712]: time="2025-08-13T00:05:08.045917320Z" level=info msg="Daemon has completed initialization" Aug 13 00:05:08.074258 dockerd[1712]: time="2025-08-13T00:05:08.074206170Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:05:08.074373 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:05:08.731771 containerd[1483]: time="2025-08-13T00:05:08.731513700Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:05:09.516657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540314046.mount: Deactivated successfully. Aug 13 00:05:10.793174 containerd[1483]: time="2025-08-13T00:05:10.792087620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:10.793174 containerd[1483]: time="2025-08-13T00:05:10.793112840Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 00:05:10.793768 containerd[1483]: time="2025-08-13T00:05:10.793661710Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:10.796129 containerd[1483]: time="2025-08-13T00:05:10.796105460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:10.797082 containerd[1483]: time="2025-08-13T00:05:10.797052320Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.0652814s" Aug 13 00:05:10.797125 containerd[1483]: time="2025-08-13T00:05:10.797089180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 00:05:10.798254 containerd[1483]: time="2025-08-13T00:05:10.798235900Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:05:11.257314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:05:11.267726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:05:11.452769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
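With "Daemon has completed initialization" and "API listen on /run/docker.sock" logged, the engine can be exercised directly over that socket; a quick verification sketch matching the overlay2/27.3.1 details above:

    curl --unix-socket /run/docker.sock http://localhost/version    # engine API, JSON reply
    docker info --format '{{.Driver}} {{.ServerVersion}}'           # -> overlay2 27.3.1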
Aug 13 00:05:11.464053 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:05:11.512408 kubelet[1960]: E0813 00:05:11.512304 1960 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:05:11.519976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:05:11.520186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:05:11.520691 systemd[1]: kubelet.service: Consumed 221ms CPU time, 109.3M memory peak. Aug 13 00:05:12.594168 containerd[1483]: time="2025-08-13T00:05:12.594081810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:12.596295 containerd[1483]: time="2025-08-13T00:05:12.596240700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 00:05:12.597205 containerd[1483]: time="2025-08-13T00:05:12.597163320Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:12.600483 containerd[1483]: time="2025-08-13T00:05:12.600045430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:12.601149 containerd[1483]: time="2025-08-13T00:05:12.601106910Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.80284378s" Aug 13 00:05:12.601187 containerd[1483]: time="2025-08-13T00:05:12.601155830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 00:05:12.602324 containerd[1483]: time="2025-08-13T00:05:12.602282930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 00:05:14.102618 containerd[1483]: time="2025-08-13T00:05:14.102534490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:14.103831 containerd[1483]: time="2025-08-13T00:05:14.103423800Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 00:05:14.104279 containerd[1483]: time="2025-08-13T00:05:14.104246040Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:14.107055 containerd[1483]: time="2025-08-13T00:05:14.107011750Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:14.108297 containerd[1483]: time="2025-08-13T00:05:14.108106670Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.50578975s" Aug 13 00:05:14.108297 containerd[1483]: time="2025-08-13T00:05:14.108141800Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 00:05:14.109971 containerd[1483]: time="2025-08-13T00:05:14.109946980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:05:15.409500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025751471.mount: Deactivated successfully. Aug 13 00:05:15.842686 containerd[1483]: time="2025-08-13T00:05:15.842114380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:15.843519 containerd[1483]: time="2025-08-13T00:05:15.843406940Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 00:05:15.844554 containerd[1483]: time="2025-08-13T00:05:15.844516990Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:15.846472 containerd[1483]: time="2025-08-13T00:05:15.846420080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:15.847279 containerd[1483]: time="2025-08-13T00:05:15.847238000Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.73725987s" Aug 13 00:05:15.847332 containerd[1483]: time="2025-08-13T00:05:15.847278380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 00:05:15.849526 containerd[1483]: time="2025-08-13T00:05:15.849475860Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:05:16.596937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458712338.mount: Deactivated successfully. 
Aug 13 00:05:17.355059 containerd[1483]: time="2025-08-13T00:05:17.354962950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:17.356089 containerd[1483]: time="2025-08-13T00:05:17.356053770Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:05:17.356689 containerd[1483]: time="2025-08-13T00:05:17.356661270Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:17.359353 containerd[1483]: time="2025-08-13T00:05:17.358931370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:17.359943 containerd[1483]: time="2025-08-13T00:05:17.359916060Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.51040357s" Aug 13 00:05:17.359986 containerd[1483]: time="2025-08-13T00:05:17.359947630Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:05:17.364800 containerd[1483]: time="2025-08-13T00:05:17.364777170Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:05:18.026783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303389190.mount: Deactivated successfully. 
Aug 13 00:05:18.030160 containerd[1483]: time="2025-08-13T00:05:18.030116140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:18.031231 containerd[1483]: time="2025-08-13T00:05:18.031184220Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:05:18.032731 containerd[1483]: time="2025-08-13T00:05:18.031332830Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:18.034432 containerd[1483]: time="2025-08-13T00:05:18.033472410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:18.034432 containerd[1483]: time="2025-08-13T00:05:18.034310770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 669.50666ms" Aug 13 00:05:18.034432 containerd[1483]: time="2025-08-13T00:05:18.034333640Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:05:18.035080 containerd[1483]: time="2025-08-13T00:05:18.035045410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:05:18.825914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601894298.mount: Deactivated successfully. Aug 13 00:05:20.370032 containerd[1483]: time="2025-08-13T00:05:20.369959110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:20.371151 containerd[1483]: time="2025-08-13T00:05:20.370963960Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 00:05:20.372084 containerd[1483]: time="2025-08-13T00:05:20.371619300Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:20.374411 containerd[1483]: time="2025-08-13T00:05:20.374347430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:05:20.377603 containerd[1483]: time="2025-08-13T00:05:20.375409710Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.34033145s" Aug 13 00:05:20.377603 containerd[1483]: time="2025-08-13T00:05:20.375448380Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:05:21.747643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
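The PullImage/ImageCreate run from 00:05:08 onward fetches the complete v1.32.7 control-plane set (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, plus coredns 1.11.3, pause 3.10, and etcd 3.5.16-0) through containerd's CRI, presumably driven by the install.sh invoked in session 7 (an assumption; the log does not identify the client). The same pull issued by hand goes through crictl:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/etcd:3.5.16-0
    crictl images    # lists the registry.k8s.io/* images pulled above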
Aug 13 00:05:21.760241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:05:21.934715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:05:21.943049 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:05:21.986337 kubelet[2119]: E0813 00:05:21.986166 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:05:21.996149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:05:21.996777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:05:21.997313 systemd[1]: kubelet.service: Consumed 186ms CPU time, 108.1M memory peak. Aug 13 00:05:22.021241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:05:22.021599 systemd[1]: kubelet.service: Consumed 186ms CPU time, 108.1M memory peak. Aug 13 00:05:22.027757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:05:22.062805 systemd[1]: Reload requested from client PID 2134 ('systemctl') (unit session-7.scope)... Aug 13 00:05:22.062819 systemd[1]: Reloading... Aug 13 00:05:22.228606 zram_generator::config[2182]: No configuration found. Aug 13 00:05:22.345365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:05:22.437448 systemd[1]: Reloading finished in 374 ms. Aug 13 00:05:22.484452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:05:22.495959 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:05:22.500419 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:05:22.501347 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:05:22.501675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:05:22.501710 systemd[1]: kubelet.service: Consumed 152ms CPU time, 99.5M memory peak. Aug 13 00:05:22.507797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:05:22.673953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:05:22.681053 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:05:22.731986 kubelet[2236]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:05:22.731986 kubelet[2236]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:05:22.731986 kubelet[2236]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:05:22.732586 kubelet[2236]: I0813 00:05:22.732336 2236 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:05:22.988326 kubelet[2236]: I0813 00:05:22.987965 2236 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:05:22.988326 kubelet[2236]: I0813 00:05:22.988001 2236 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:05:22.988457 kubelet[2236]: I0813 00:05:22.988334 2236 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:05:23.017094 kubelet[2236]: I0813 00:05:23.017070 2236 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:05:23.018654 kubelet[2236]: E0813 00:05:23.018091 2236 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.216.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:23.024371 kubelet[2236]: E0813 00:05:23.024337 2236 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:05:23.024371 kubelet[2236]: I0813 00:05:23.024370 2236 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:05:23.029204 kubelet[2236]: I0813 00:05:23.028927 2236 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:05:23.030439 kubelet[2236]: I0813 00:05:23.030412 2236 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:05:23.030642 kubelet[2236]: I0813 00:05:23.030476 2236 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-216-155","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:05:23.030804 kubelet[2236]: I0813 00:05:23.030789 2236 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:05:23.030848 kubelet[2236]: I0813 00:05:23.030840 2236 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:05:23.030996 kubelet[2236]: I0813 00:05:23.030984 2236 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:05:23.035074 kubelet[2236]: I0813 00:05:23.035062 2236 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:05:23.035145 kubelet[2236]: I0813 00:05:23.035137 2236 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:05:23.035372 kubelet[2236]: I0813 00:05:23.035188 2236 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:05:23.035372 kubelet[2236]: I0813 00:05:23.035201 2236 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:05:23.040446 kubelet[2236]: W0813 00:05:23.039966 2236 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.216.155:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-216-155&limit=500&resourceVersion=0": dial tcp 172.234.216.155:6443: connect: connection refused Aug 13 00:05:23.040446 kubelet[2236]: E0813 00:05:23.040024 2236 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.216.155:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-216-155&limit=500&resourceVersion=0\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:23.040446 kubelet[2236]: I0813 
00:05:23.040122 2236 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:05:23.040645 kubelet[2236]: I0813 00:05:23.040613 2236 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:05:23.042406 kubelet[2236]: W0813 00:05:23.041472 2236 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:05:23.045434 kubelet[2236]: I0813 00:05:23.045025 2236 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:05:23.045434 kubelet[2236]: I0813 00:05:23.045084 2236 server.go:1287] "Started kubelet" Aug 13 00:05:23.047105 kubelet[2236]: W0813 00:05:23.047074 2236 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.216.155:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.216.155:6443: connect: connection refused Aug 13 00:05:23.047188 kubelet[2236]: E0813 00:05:23.047172 2236 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.216.155:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:23.047308 kubelet[2236]: I0813 00:05:23.047276 2236 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:05:23.052590 kubelet[2236]: I0813 00:05:23.052489 2236 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:05:23.053208 kubelet[2236]: I0813 00:05:23.053192 2236 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:05:23.055475 kubelet[2236]: I0813 00:05:23.055441 2236 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:05:23.058438 kubelet[2236]: E0813 00:05:23.056792 2236 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.216.155:6443/api/v1/namespaces/default/events\": dial tcp 172.234.216.155:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-216-155.185b2ac91c735f04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-216-155,UID:172-234-216-155,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-216-155,},FirstTimestamp:2025-08-13 00:05:23.04505626 +0000 UTC m=+0.359756871,LastTimestamp:2025-08-13 00:05:23.04505626 +0000 UTC m=+0.359756871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-216-155,}" Aug 13 00:05:23.058438 kubelet[2236]: I0813 00:05:23.058287 2236 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:05:23.059569 kubelet[2236]: I0813 00:05:23.059524 2236 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:05:23.062484 kubelet[2236]: E0813 00:05:23.062446 2236 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-216-155\" not found" Aug 13 00:05:23.062532 kubelet[2236]: I0813 
00:05:23.062489 2236 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:05:23.062718 kubelet[2236]: I0813 00:05:23.062680 2236 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:05:23.062751 kubelet[2236]: I0813 00:05:23.062735 2236 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:05:23.063536 kubelet[2236]: W0813 00:05:23.063203 2236 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.216.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.216.155:6443: connect: connection refused Aug 13 00:05:23.063536 kubelet[2236]: E0813 00:05:23.063253 2236 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.216.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:23.063536 kubelet[2236]: E0813 00:05:23.063376 2236 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:05:23.063964 kubelet[2236]: I0813 00:05:23.063934 2236 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:05:23.064051 kubelet[2236]: I0813 00:05:23.064018 2236 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:05:23.065252 kubelet[2236]: I0813 00:05:23.065224 2236 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:05:23.077750 kubelet[2236]: I0813 00:05:23.077725 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:05:23.078767 kubelet[2236]: I0813 00:05:23.078754 2236 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:05:23.079032 kubelet[2236]: I0813 00:05:23.078807 2236 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:05:23.079032 kubelet[2236]: I0813 00:05:23.078826 2236 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:05:23.079032 kubelet[2236]: I0813 00:05:23.078833 2236 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:05:23.079032 kubelet[2236]: E0813 00:05:23.078875 2236 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:05:23.086119 kubelet[2236]: E0813 00:05:23.086098 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.216.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-216-155?timeout=10s\": dial tcp 172.234.216.155:6443: connect: connection refused" interval="200ms" Aug 13 00:05:23.087843 kubelet[2236]: W0813 00:05:23.087813 2236 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.216.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.216.155:6443: connect: connection refused Aug 13 00:05:23.088606 kubelet[2236]: E0813 00:05:23.088102 2236 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.216.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:23.094919 kubelet[2236]: I0813 00:05:23.094896 2236 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:05:23.094919 kubelet[2236]: I0813 00:05:23.094913 2236 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:05:23.095004 kubelet[2236]: I0813 00:05:23.094931 2236 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:05:23.096422 kubelet[2236]: I0813 00:05:23.096393 2236 policy_none.go:49] "None policy: Start" Aug 13 00:05:23.096422 kubelet[2236]: I0813 00:05:23.096415 2236 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:05:23.096422 kubelet[2236]: I0813 00:05:23.096428 2236 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:05:23.103696 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:05:23.112965 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:05:23.116299 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:05:23.126496 kubelet[2236]: I0813 00:05:23.126460 2236 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:05:23.126734 kubelet[2236]: I0813 00:05:23.126703 2236 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:05:23.126777 kubelet[2236]: I0813 00:05:23.126728 2236 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:05:23.127113 kubelet[2236]: I0813 00:05:23.127000 2236 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:05:23.128742 kubelet[2236]: E0813 00:05:23.128687 2236 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:05:23.128742 kubelet[2236]: E0813 00:05:23.128729 2236 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-216-155\" not found" Aug 13 00:05:23.194083 systemd[1]: Created slice kubepods-burstable-pod420a4ba5817a3e8a257531e7cd202a1a.slice - libcontainer container kubepods-burstable-pod420a4ba5817a3e8a257531e7cd202a1a.slice. Aug 13 00:05:23.206972 kubelet[2236]: E0813 00:05:23.206931 2236 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:23.210111 systemd[1]: Created slice kubepods-burstable-pod1b0291905de3e4422b68e2a2125f4aa3.slice - libcontainer container kubepods-burstable-pod1b0291905de3e4422b68e2a2125f4aa3.slice. Aug 13 00:05:23.216055 kubelet[2236]: E0813 00:05:23.216030 2236 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:23.219586 systemd[1]: Created slice kubepods-burstable-pod8af98f31fe347305e955be7ba665868a.slice - libcontainer container kubepods-burstable-pod8af98f31fe347305e955be7ba665868a.slice. Aug 13 00:05:23.221620 kubelet[2236]: E0813 00:05:23.221587 2236 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:23.229461 kubelet[2236]: I0813 00:05:23.229377 2236 kubelet_node_status.go:75] "Attempting to register node" node="172-234-216-155" Aug 13 00:05:23.229958 kubelet[2236]: E0813 00:05:23.229912 2236 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.216.155:6443/api/v1/nodes\": dial tcp 172.234.216.155:6443: connect: connection refused" node="172-234-216-155" Aug 13 00:05:23.288015 kubelet[2236]: E0813 00:05:23.287932 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.216.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-216-155?timeout=10s\": dial tcp 172.234.216.155:6443: connect: connection refused" interval="400ms" Aug 13 00:05:23.364619 kubelet[2236]: I0813 00:05:23.364472 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-kubeconfig\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:23.364619 kubelet[2236]: I0813 00:05:23.364501 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8af98f31fe347305e955be7ba665868a-kubeconfig\") pod \"kube-scheduler-172-234-216-155\" (UID: \"8af98f31fe347305e955be7ba665868a\") " pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:23.364619 kubelet[2236]: I0813 00:05:23.364522 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-flexvolume-dir\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:23.364619 kubelet[2236]: I0813 
00:05:23.364538 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-k8s-certs\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:23.364619 kubelet[2236]: I0813 00:05:23.364574 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:23.364767 kubelet[2236]: I0813 00:05:23.364674 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/420a4ba5817a3e8a257531e7cd202a1a-ca-certs\") pod \"kube-apiserver-172-234-216-155\" (UID: \"420a4ba5817a3e8a257531e7cd202a1a\") " pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:23.364977 kubelet[2236]: I0813 00:05:23.364765 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/420a4ba5817a3e8a257531e7cd202a1a-k8s-certs\") pod \"kube-apiserver-172-234-216-155\" (UID: \"420a4ba5817a3e8a257531e7cd202a1a\") " pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:23.365003 kubelet[2236]: I0813 00:05:23.364988 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/420a4ba5817a3e8a257531e7cd202a1a-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-216-155\" (UID: \"420a4ba5817a3e8a257531e7cd202a1a\") " pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:23.365089 kubelet[2236]: I0813 00:05:23.365024 2236 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-ca-certs\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:23.432706 kubelet[2236]: I0813 00:05:23.432647 2236 kubelet_node_status.go:75] "Attempting to register node" node="172-234-216-155" Aug 13 00:05:23.432992 kubelet[2236]: E0813 00:05:23.432968 2236 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.216.155:6443/api/v1/nodes\": dial tcp 172.234.216.155:6443: connect: connection refused" node="172-234-216-155" Aug 13 00:05:23.508292 kubelet[2236]: E0813 00:05:23.508230 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:23.509249 containerd[1483]: time="2025-08-13T00:05:23.509198610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-216-155,Uid:420a4ba5817a3e8a257531e7cd202a1a,Namespace:kube-system,Attempt:0,}" Aug 13 00:05:23.517388 kubelet[2236]: E0813 00:05:23.517366 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:23.517674 containerd[1483]: time="2025-08-13T00:05:23.517651670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-216-155,Uid:1b0291905de3e4422b68e2a2125f4aa3,Namespace:kube-system,Attempt:0,}" Aug 13 00:05:23.522271 kubelet[2236]: E0813 00:05:23.522240 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:23.522545 containerd[1483]: time="2025-08-13T00:05:23.522523090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-216-155,Uid:8af98f31fe347305e955be7ba665868a,Namespace:kube-system,Attempt:0,}" Aug 13 00:05:23.689339 kubelet[2236]: E0813 00:05:23.689268 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.216.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-216-155?timeout=10s\": dial tcp 172.234.216.155:6443: connect: connection refused" interval="800ms" Aug 13 00:05:23.834904 kubelet[2236]: I0813 00:05:23.834873 2236 kubelet_node_status.go:75] "Attempting to register node" node="172-234-216-155" Aug 13 00:05:23.835491 kubelet[2236]: E0813 00:05:23.835280 2236 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.216.155:6443/api/v1/nodes\": dial tcp 172.234.216.155:6443: connect: connection refused" node="172-234-216-155" Aug 13 00:05:23.948842 kubelet[2236]: W0813 00:05:23.948683 2236 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.216.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.216.155:6443: connect: connection refused Aug 13 00:05:23.948842 kubelet[2236]: E0813 00:05:23.948725 2236 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.216.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:24.199802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093234265.mount: Deactivated successfully. 
Aug 13 00:05:24.205111 containerd[1483]: time="2025-08-13T00:05:24.204121680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:05:24.205656 containerd[1483]: time="2025-08-13T00:05:24.205630970Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:05:24.207977 containerd[1483]: time="2025-08-13T00:05:24.207939000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:05:24.208614 containerd[1483]: time="2025-08-13T00:05:24.208183790Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 00:05:24.209084 containerd[1483]: time="2025-08-13T00:05:24.209055120Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:05:24.214653 containerd[1483]: time="2025-08-13T00:05:24.214596200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:05:24.216721 containerd[1483]: time="2025-08-13T00:05:24.215700270Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:05:24.217255 kubelet[2236]: W0813 00:05:24.217183 2236 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.216.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.216.155:6443: connect: connection refused Aug 13 00:05:24.217255 kubelet[2236]: E0813 00:05:24.217227 2236 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.216.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:24.218064 containerd[1483]: time="2025-08-13T00:05:24.218042200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:05:24.218616 containerd[1483]: time="2025-08-13T00:05:24.218597600Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 709.31427ms" Aug 13 00:05:24.220703 containerd[1483]: time="2025-08-13T00:05:24.220677040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.97366ms" Aug 13 00:05:24.221589 containerd[1483]: 
time="2025-08-13T00:05:24.221548350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 698.95745ms" Aug 13 00:05:24.352585 containerd[1483]: time="2025-08-13T00:05:24.352433130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:24.352585 containerd[1483]: time="2025-08-13T00:05:24.352488570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:24.352585 containerd[1483]: time="2025-08-13T00:05:24.352503250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:24.354299 containerd[1483]: time="2025-08-13T00:05:24.354072420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:24.354299 containerd[1483]: time="2025-08-13T00:05:24.354102360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:24.354299 containerd[1483]: time="2025-08-13T00:05:24.354115080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:24.354299 containerd[1483]: time="2025-08-13T00:05:24.354171130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:24.356712 containerd[1483]: time="2025-08-13T00:05:24.350152930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:24.356712 containerd[1483]: time="2025-08-13T00:05:24.353551110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:24.356712 containerd[1483]: time="2025-08-13T00:05:24.353602790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:24.356712 containerd[1483]: time="2025-08-13T00:05:24.353668660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:24.368769 containerd[1483]: time="2025-08-13T00:05:24.368284840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:24.390742 systemd[1]: Started cri-containerd-f90a129d12293a7a2174f2889ba07ef86b7dcadc65317f82908a3d143105c9e5.scope - libcontainer container f90a129d12293a7a2174f2889ba07ef86b7dcadc65317f82908a3d143105c9e5. 
Aug 13 00:05:24.416149 kubelet[2236]: W0813 00:05:24.412459 2236 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.216.155:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-216-155&limit=500&resourceVersion=0": dial tcp 172.234.216.155:6443: connect: connection refused Aug 13 00:05:24.416149 kubelet[2236]: E0813 00:05:24.412526 2236 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.216.155:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-216-155&limit=500&resourceVersion=0\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:24.432689 systemd[1]: Started cri-containerd-243a670ebc0b7e4136326d6a7df0cfd31d6138ee4df55bb70660923d4f77dfea.scope - libcontainer container 243a670ebc0b7e4136326d6a7df0cfd31d6138ee4df55bb70660923d4f77dfea. Aug 13 00:05:24.436579 systemd[1]: Started cri-containerd-6eb64b67ffa8dc7a536ec5da134748a24750076ab13a1fc66302b4f79fd87f50.scope - libcontainer container 6eb64b67ffa8dc7a536ec5da134748a24750076ab13a1fc66302b4f79fd87f50. Aug 13 00:05:24.475089 containerd[1483]: time="2025-08-13T00:05:24.474997030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-216-155,Uid:1b0291905de3e4422b68e2a2125f4aa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f90a129d12293a7a2174f2889ba07ef86b7dcadc65317f82908a3d143105c9e5\"" Aug 13 00:05:24.478889 kubelet[2236]: E0813 00:05:24.478867 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:24.484753 containerd[1483]: time="2025-08-13T00:05:24.484661690Z" level=info msg="CreateContainer within sandbox \"f90a129d12293a7a2174f2889ba07ef86b7dcadc65317f82908a3d143105c9e5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:05:24.490538 kubelet[2236]: E0813 00:05:24.490469 2236 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.216.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-216-155?timeout=10s\": dial tcp 172.234.216.155:6443: connect: connection refused" interval="1.6s" Aug 13 00:05:24.494699 containerd[1483]: time="2025-08-13T00:05:24.494452090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-216-155,Uid:8af98f31fe347305e955be7ba665868a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eb64b67ffa8dc7a536ec5da134748a24750076ab13a1fc66302b4f79fd87f50\"" Aug 13 00:05:24.495510 kubelet[2236]: E0813 00:05:24.495469 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:24.508691 containerd[1483]: time="2025-08-13T00:05:24.508653860Z" level=info msg="CreateContainer within sandbox \"6eb64b67ffa8dc7a536ec5da134748a24750076ab13a1fc66302b4f79fd87f50\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:05:24.511739 containerd[1483]: time="2025-08-13T00:05:24.511717140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-216-155,Uid:420a4ba5817a3e8a257531e7cd202a1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"243a670ebc0b7e4136326d6a7df0cfd31d6138ee4df55bb70660923d4f77dfea\"" Aug 13 
00:05:24.513120 kubelet[2236]: E0813 00:05:24.513097 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:24.514524 containerd[1483]: time="2025-08-13T00:05:24.514495700Z" level=info msg="CreateContainer within sandbox \"243a670ebc0b7e4136326d6a7df0cfd31d6138ee4df55bb70660923d4f77dfea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:05:24.516100 containerd[1483]: time="2025-08-13T00:05:24.516015490Z" level=info msg="CreateContainer within sandbox \"f90a129d12293a7a2174f2889ba07ef86b7dcadc65317f82908a3d143105c9e5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de8b56b0b9e11c37e6f344792e29c2f6c68ca1bcd0458bee46deef395f92b122\"" Aug 13 00:05:24.517152 containerd[1483]: time="2025-08-13T00:05:24.517134080Z" level=info msg="StartContainer for \"de8b56b0b9e11c37e6f344792e29c2f6c68ca1bcd0458bee46deef395f92b122\"" Aug 13 00:05:24.523243 containerd[1483]: time="2025-08-13T00:05:24.523217010Z" level=info msg="CreateContainer within sandbox \"6eb64b67ffa8dc7a536ec5da134748a24750076ab13a1fc66302b4f79fd87f50\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed4348ba4a73b65765feca154218d479c9cc9e94cd7aa41daf0086276c45a7f2\"" Aug 13 00:05:24.523690 containerd[1483]: time="2025-08-13T00:05:24.523672210Z" level=info msg="StartContainer for \"ed4348ba4a73b65765feca154218d479c9cc9e94cd7aa41daf0086276c45a7f2\"" Aug 13 00:05:24.530155 containerd[1483]: time="2025-08-13T00:05:24.530135010Z" level=info msg="CreateContainer within sandbox \"243a670ebc0b7e4136326d6a7df0cfd31d6138ee4df55bb70660923d4f77dfea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fa16bb65343fcb3d084871a61a84defc74054f00084dfed07fface28048500a9\"" Aug 13 00:05:24.530823 containerd[1483]: time="2025-08-13T00:05:24.530748730Z" level=info msg="StartContainer for \"fa16bb65343fcb3d084871a61a84defc74054f00084dfed07fface28048500a9\"" Aug 13 00:05:24.566684 systemd[1]: Started cri-containerd-de8b56b0b9e11c37e6f344792e29c2f6c68ca1bcd0458bee46deef395f92b122.scope - libcontainer container de8b56b0b9e11c37e6f344792e29c2f6c68ca1bcd0458bee46deef395f92b122. Aug 13 00:05:24.569138 systemd[1]: Started cri-containerd-fa16bb65343fcb3d084871a61a84defc74054f00084dfed07fface28048500a9.scope - libcontainer container fa16bb65343fcb3d084871a61a84defc74054f00084dfed07fface28048500a9. Aug 13 00:05:24.574018 systemd[1]: Started cri-containerd-ed4348ba4a73b65765feca154218d479c9cc9e94cd7aa41daf0086276c45a7f2.scope - libcontainer container ed4348ba4a73b65765feca154218d479c9cc9e94cd7aa41daf0086276c45a7f2. 
Aug 13 00:05:24.601184 kubelet[2236]: W0813 00:05:24.600746 2236 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.216.155:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.216.155:6443: connect: connection refused Aug 13 00:05:24.601184 kubelet[2236]: E0813 00:05:24.601135 2236 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.216.155:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.216.155:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:05:24.631684 containerd[1483]: time="2025-08-13T00:05:24.631209330Z" level=info msg="StartContainer for \"ed4348ba4a73b65765feca154218d479c9cc9e94cd7aa41daf0086276c45a7f2\" returns successfully" Aug 13 00:05:24.641289 kubelet[2236]: I0813 00:05:24.640955 2236 kubelet_node_status.go:75] "Attempting to register node" node="172-234-216-155" Aug 13 00:05:24.641369 containerd[1483]: time="2025-08-13T00:05:24.641048200Z" level=info msg="StartContainer for \"fa16bb65343fcb3d084871a61a84defc74054f00084dfed07fface28048500a9\" returns successfully" Aug 13 00:05:24.642035 kubelet[2236]: E0813 00:05:24.642011 2236 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.216.155:6443/api/v1/nodes\": dial tcp 172.234.216.155:6443: connect: connection refused" node="172-234-216-155" Aug 13 00:05:24.656825 containerd[1483]: time="2025-08-13T00:05:24.656744010Z" level=info msg="StartContainer for \"de8b56b0b9e11c37e6f344792e29c2f6c68ca1bcd0458bee46deef395f92b122\" returns successfully" Aug 13 00:05:25.105859 kubelet[2236]: E0813 00:05:25.105543 2236 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:25.108479 kubelet[2236]: E0813 00:05:25.108448 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:25.110574 kubelet[2236]: E0813 00:05:25.110523 2236 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:25.111656 kubelet[2236]: E0813 00:05:25.111631 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:25.112083 kubelet[2236]: E0813 00:05:25.112056 2236 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:25.112170 kubelet[2236]: E0813 00:05:25.112142 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:26.099825 kubelet[2236]: E0813 00:05:26.099763 2236 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:26.114676 kubelet[2236]: E0813 00:05:26.114631 2236 kubelet.go:3190] "No need to create a mirror pod, since failed 
to get node info from the cluster" err="node \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:26.115049 kubelet[2236]: E0813 00:05:26.114784 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:26.115224 kubelet[2236]: E0813 00:05:26.115201 2236 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-216-155\" not found" node="172-234-216-155" Aug 13 00:05:26.115311 kubelet[2236]: E0813 00:05:26.115292 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:26.245155 kubelet[2236]: I0813 00:05:26.245130 2236 kubelet_node_status.go:75] "Attempting to register node" node="172-234-216-155" Aug 13 00:05:26.257230 kubelet[2236]: I0813 00:05:26.255933 2236 kubelet_node_status.go:78] "Successfully registered node" node="172-234-216-155" Aug 13 00:05:26.257230 kubelet[2236]: E0813 00:05:26.256012 2236 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-234-216-155\": node \"172-234-216-155\" not found" Aug 13 00:05:26.266878 kubelet[2236]: E0813 00:05:26.266848 2236 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-216-155\" not found" Aug 13 00:05:26.367504 kubelet[2236]: E0813 00:05:26.367352 2236 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-216-155\" not found" Aug 13 00:05:26.467939 kubelet[2236]: E0813 00:05:26.467913 2236 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-216-155\" not found" Aug 13 00:05:26.568220 kubelet[2236]: E0813 00:05:26.568176 2236 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-216-155\" not found" Aug 13 00:05:26.668863 kubelet[2236]: E0813 00:05:26.668761 2236 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-216-155\" not found" Aug 13 00:05:26.768307 kubelet[2236]: I0813 00:05:26.768273 2236 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:26.774239 kubelet[2236]: E0813 00:05:26.774166 2236 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-216-155\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:26.774239 kubelet[2236]: I0813 00:05:26.774189 2236 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:26.776384 kubelet[2236]: E0813 00:05:26.775367 2236 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-216-155\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:26.776384 kubelet[2236]: I0813 00:05:26.775380 2236 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:26.776384 kubelet[2236]: E0813 00:05:26.776248 2236 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-216-155\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:27.048997 kubelet[2236]: I0813 00:05:27.048951 2236 apiserver.go:52] "Watching apiserver" Aug 13 00:05:27.063146 kubelet[2236]: I0813 00:05:27.063102 2236 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:05:27.889801 kubelet[2236]: I0813 00:05:27.889762 2236 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:27.894019 kubelet[2236]: E0813 00:05:27.893994 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:28.119587 kubelet[2236]: E0813 00:05:28.117526 2236 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:28.142873 systemd[1]: Reload requested from client PID 2510 ('systemctl') (unit session-7.scope)... Aug 13 00:05:28.142900 systemd[1]: Reloading... Aug 13 00:05:28.272617 zram_generator::config[2561]: No configuration found. Aug 13 00:05:28.386948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:05:28.494681 systemd[1]: Reloading finished in 351 ms. Aug 13 00:05:28.523761 kubelet[2236]: I0813 00:05:28.523665 2236 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:05:28.523783 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:05:28.544459 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:05:28.544963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:05:28.545034 systemd[1]: kubelet.service: Consumed 802ms CPU time, 133.6M memory peak. Aug 13 00:05:28.551837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:05:28.738781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:05:28.743327 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:05:28.788733 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 00:05:28.791463 kubelet[2606]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:05:28.791463 kubelet[2606]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:05:28.791463 kubelet[2606]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:05:28.792184 kubelet[2606]: I0813 00:05:28.791550 2606 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:05:28.799592 kubelet[2606]: I0813 00:05:28.799540 2606 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:05:28.799592 kubelet[2606]: I0813 00:05:28.799581 2606 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:05:28.799797 kubelet[2606]: I0813 00:05:28.799773 2606 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:05:28.800818 kubelet[2606]: I0813 00:05:28.800791 2606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:05:28.802550 kubelet[2606]: I0813 00:05:28.802516 2606 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:05:28.805896 kubelet[2606]: E0813 00:05:28.805870 2606 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:05:28.805896 kubelet[2606]: I0813 00:05:28.805894 2606 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:05:28.813682 kubelet[2606]: I0813 00:05:28.812689 2606 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:05:28.813682 kubelet[2606]: I0813 00:05:28.812924 2606 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:05:28.813682 kubelet[2606]: I0813 00:05:28.812960 2606 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-216-155","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:05:28.813682 kubelet[2606]: I0813 00:05:28.813210 2606 topology_manager.go:138] 
"Creating topology manager with none policy" Aug 13 00:05:28.813983 kubelet[2606]: I0813 00:05:28.813225 2606 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:05:28.813983 kubelet[2606]: I0813 00:05:28.813277 2606 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:05:28.813983 kubelet[2606]: I0813 00:05:28.813429 2606 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:05:28.813983 kubelet[2606]: I0813 00:05:28.813454 2606 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:05:28.813983 kubelet[2606]: I0813 00:05:28.813473 2606 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:05:28.813983 kubelet[2606]: I0813 00:05:28.813484 2606 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:05:28.820949 kubelet[2606]: I0813 00:05:28.818635 2606 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:05:28.820949 kubelet[2606]: I0813 00:05:28.819005 2606 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:05:28.821289 kubelet[2606]: I0813 00:05:28.821276 2606 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:05:28.821365 kubelet[2606]: I0813 00:05:28.821355 2606 server.go:1287] "Started kubelet" Aug 13 00:05:28.825072 kubelet[2606]: I0813 00:05:28.825017 2606 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:05:28.825685 kubelet[2606]: I0813 00:05:28.825658 2606 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:05:28.825726 kubelet[2606]: I0813 00:05:28.825712 2606 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:05:28.825886 kubelet[2606]: I0813 00:05:28.825872 2606 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:05:28.826582 kubelet[2606]: I0813 00:05:28.826461 2606 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:05:28.833262 kubelet[2606]: I0813 00:05:28.833132 2606 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:05:28.835534 kubelet[2606]: I0813 00:05:28.835281 2606 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:05:28.839232 kubelet[2606]: I0813 00:05:28.838510 2606 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:05:28.839232 kubelet[2606]: I0813 00:05:28.838969 2606 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:05:28.839468 kubelet[2606]: I0813 00:05:28.839400 2606 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:05:28.839835 kubelet[2606]: I0813 00:05:28.839690 2606 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:05:28.841675 kubelet[2606]: E0813 00:05:28.841657 2606 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:05:28.843819 kubelet[2606]: I0813 00:05:28.842479 2606 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 00:05:28.846238 kubelet[2606]: I0813 00:05:28.846198 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:05:28.846238 kubelet[2606]: I0813 00:05:28.846228 2606 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:05:28.846550 kubelet[2606]: I0813 00:05:28.846246 2606 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:05:28.846550 kubelet[2606]: I0813 00:05:28.846254 2606 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:05:28.846550 kubelet[2606]: E0813 00:05:28.846298 2606 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:05:28.851606 kubelet[2606]: I0813 00:05:28.850020 2606 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:05:28.902887 kubelet[2606]: I0813 00:05:28.902864 2606 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:05:28.903039 kubelet[2606]: I0813 00:05:28.903025 2606 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:05:28.903115 kubelet[2606]: I0813 00:05:28.903091 2606 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:05:28.903296 kubelet[2606]: I0813 00:05:28.903281 2606 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:05:28.903356 kubelet[2606]: I0813 00:05:28.903336 2606 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:05:28.903409 kubelet[2606]: I0813 00:05:28.903399 2606 policy_none.go:49] "None policy: Start" Aug 13 00:05:28.903456 kubelet[2606]: I0813 00:05:28.903448 2606 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:05:28.903496 kubelet[2606]: I0813 00:05:28.903488 2606 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:05:28.903660 kubelet[2606]: I0813 00:05:28.903647 2606 state_mem.go:75] "Updated machine memory state" Aug 13 00:05:28.908658 kubelet[2606]: I0813 00:05:28.908645 2606 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:05:28.909028 kubelet[2606]: I0813 00:05:28.909017 2606 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:05:28.910588 kubelet[2606]: I0813 00:05:28.910512 2606 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:05:28.911810 kubelet[2606]: I0813 00:05:28.911782 2606 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:05:28.913346 kubelet[2606]: E0813 00:05:28.913330 2606 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:05:28.947508 kubelet[2606]: I0813 00:05:28.947475 2606 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:28.947922 kubelet[2606]: I0813 00:05:28.947723 2606 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:28.948699 kubelet[2606]: I0813 00:05:28.947797 2606 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:28.953072 kubelet[2606]: E0813 00:05:28.953009 2606 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-216-155\" already exists" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:29.023655 kubelet[2606]: I0813 00:05:29.023607 2606 kubelet_node_status.go:75] "Attempting to register node" node="172-234-216-155" Aug 13 00:05:29.029413 kubelet[2606]: I0813 00:05:29.029387 2606 kubelet_node_status.go:124] "Node was previously registered" node="172-234-216-155" Aug 13 00:05:29.030410 kubelet[2606]: I0813 00:05:29.029645 2606 kubelet_node_status.go:78] "Successfully registered node" node="172-234-216-155" Aug 13 00:05:29.041720 kubelet[2606]: I0813 00:05:29.040797 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-ca-certs\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:29.041720 kubelet[2606]: I0813 00:05:29.040827 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-flexvolume-dir\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:29.041720 kubelet[2606]: I0813 00:05:29.040844 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-kubeconfig\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:29.041720 kubelet[2606]: I0813 00:05:29.040859 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:29.041720 kubelet[2606]: I0813 00:05:29.040877 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8af98f31fe347305e955be7ba665868a-kubeconfig\") pod \"kube-scheduler-172-234-216-155\" (UID: \"8af98f31fe347305e955be7ba665868a\") " pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:29.041904 kubelet[2606]: I0813 00:05:29.041224 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/420a4ba5817a3e8a257531e7cd202a1a-k8s-certs\") pod \"kube-apiserver-172-234-216-155\" (UID: \"420a4ba5817a3e8a257531e7cd202a1a\") " pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:29.041904 kubelet[2606]: I0813 00:05:29.041240 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b0291905de3e4422b68e2a2125f4aa3-k8s-certs\") pod \"kube-controller-manager-172-234-216-155\" (UID: \"1b0291905de3e4422b68e2a2125f4aa3\") " pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:29.041904 kubelet[2606]: I0813 00:05:29.041255 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/420a4ba5817a3e8a257531e7cd202a1a-ca-certs\") pod \"kube-apiserver-172-234-216-155\" (UID: \"420a4ba5817a3e8a257531e7cd202a1a\") " pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:29.041904 kubelet[2606]: I0813 00:05:29.041269 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/420a4ba5817a3e8a257531e7cd202a1a-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-216-155\" (UID: \"420a4ba5817a3e8a257531e7cd202a1a\") " pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:29.132248 sudo[2643]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:05:29.132666 sudo[2643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:05:29.253157 kubelet[2606]: E0813 00:05:29.252221 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:29.253401 kubelet[2606]: E0813 00:05:29.253385 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:29.253676 kubelet[2606]: E0813 00:05:29.253587 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:29.599538 sudo[2643]: pam_unix(sudo:session): session closed for user root Aug 13 00:05:29.815718 kubelet[2606]: I0813 00:05:29.815060 2606 apiserver.go:52] "Watching apiserver" Aug 13 00:05:29.838939 kubelet[2606]: I0813 00:05:29.838901 2606 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:05:29.880858 kubelet[2606]: E0813 00:05:29.878781 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:29.881071 kubelet[2606]: I0813 00:05:29.881057 2606 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:29.882806 kubelet[2606]: I0813 00:05:29.881290 2606 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:29.897611 kubelet[2606]: E0813 00:05:29.897437 2606 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-216-155\" already exists" 
pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:29.897611 kubelet[2606]: E0813 00:05:29.897550 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:29.911529 kubelet[2606]: E0813 00:05:29.911477 2606 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-216-155\" already exists" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:29.911890 kubelet[2606]: E0813 00:05:29.911843 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:29.975156 kubelet[2606]: I0813 00:05:29.974965 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-216-155" podStartSLOduration=1.97494678 podStartE2EDuration="1.97494678s" podCreationTimestamp="2025-08-13 00:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:05:29.96510641 +0000 UTC m=+1.217300341" watchObservedRunningTime="2025-08-13 00:05:29.97494678 +0000 UTC m=+1.227140711" Aug 13 00:05:29.983568 kubelet[2606]: I0813 00:05:29.983526 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-216-155" podStartSLOduration=1.98351279 podStartE2EDuration="1.98351279s" podCreationTimestamp="2025-08-13 00:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:05:29.98299582 +0000 UTC m=+1.235189751" watchObservedRunningTime="2025-08-13 00:05:29.98351279 +0000 UTC m=+1.235706721" Aug 13 00:05:29.983718 kubelet[2606]: I0813 00:05:29.983623 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-216-155" podStartSLOduration=2.98361941 podStartE2EDuration="2.98361941s" podCreationTimestamp="2025-08-13 00:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:05:29.97587107 +0000 UTC m=+1.228065001" watchObservedRunningTime="2025-08-13 00:05:29.98361941 +0000 UTC m=+1.235813351" Aug 13 00:05:30.863579 sudo[1695]: pam_unix(sudo:session): session closed for user root Aug 13 00:05:30.880486 kubelet[2606]: E0813 00:05:30.880403 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:30.880486 kubelet[2606]: E0813 00:05:30.880435 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:30.913938 sshd[1694]: Connection closed by 139.178.89.65 port 37554 Aug 13 00:05:30.914349 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:30.917792 systemd[1]: sshd@6-172.234.216.155:22-139.178.89.65:37554.service: Deactivated successfully. Aug 13 00:05:30.920413 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:05:30.920852 systemd[1]: session-7.scope: Consumed 3.549s CPU time, 262.9M memory peak. 
Aug 13 00:05:30.923166 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
Aug 13 00:05:30.924076 systemd-logind[1455]: Removed session 7.
Aug 13 00:05:33.262397 kubelet[2606]: I0813 00:05:33.262360 2606 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 00:05:33.263095 containerd[1483]: time="2025-08-13T00:05:33.262973380Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 00:05:33.263398 kubelet[2606]: I0813 00:05:33.263115 2606 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 00:05:33.883584 kubelet[2606]: I0813 00:05:33.882480 2606 status_manager.go:890] "Failed to get status for pod" podUID="a55ef678-4cfd-4ae6-9805-0df4627ac65c" pod="kube-system/cilium-gwhs6" err="pods \"cilium-gwhs6\" is forbidden: User \"system:node:172-234-216-155\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-216-155' and this object"
Aug 13 00:05:33.894531 systemd[1]: Created slice kubepods-burstable-poda55ef678_4cfd_4ae6_9805_0df4627ac65c.slice - libcontainer container kubepods-burstable-poda55ef678_4cfd_4ae6_9805_0df4627ac65c.slice.
Aug 13 00:05:33.904489 systemd[1]: Created slice kubepods-besteffort-podf76c78fc_bad0_4732_90f1_48a54552f45a.slice - libcontainer container kubepods-besteffort-podf76c78fc_bad0_4732_90f1_48a54552f45a.slice.
Aug 13 00:05:33.973087 kubelet[2606]: I0813 00:05:33.973020 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-bpf-maps\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973087 kubelet[2606]: I0813 00:05:33.973058 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-cgroup\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973087 kubelet[2606]: I0813 00:05:33.973077 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-host-proc-sys-net\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973087 kubelet[2606]: I0813 00:05:33.973096 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f76c78fc-bad0-4732-90f1-48a54552f45a-lib-modules\") pod \"kube-proxy-cph2s\" (UID: \"f76c78fc-bad0-4732-90f1-48a54552f45a\") " pod="kube-system/kube-proxy-cph2s"
Aug 13 00:05:33.973087 kubelet[2606]: I0813 00:05:33.973113 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-lib-modules\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973446 kubelet[2606]: I0813 00:05:33.973132 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-run\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973446 kubelet[2606]: I0813 00:05:33.973146 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-etc-cni-netd\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973446 kubelet[2606]: I0813 00:05:33.973161 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-config-path\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973446 kubelet[2606]: I0813 00:05:33.973175 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-hostproc\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973446 kubelet[2606]: I0813 00:05:33.973196 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-xtables-lock\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973446 kubelet[2606]: I0813 00:05:33.973212 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-host-proc-sys-kernel\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973622 kubelet[2606]: I0813 00:05:33.973229 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f76c78fc-bad0-4732-90f1-48a54552f45a-kube-proxy\") pod \"kube-proxy-cph2s\" (UID: \"f76c78fc-bad0-4732-90f1-48a54552f45a\") " pod="kube-system/kube-proxy-cph2s"
Aug 13 00:05:33.973622 kubelet[2606]: I0813 00:05:33.973251 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f76c78fc-bad0-4732-90f1-48a54552f45a-xtables-lock\") pod \"kube-proxy-cph2s\" (UID: \"f76c78fc-bad0-4732-90f1-48a54552f45a\") " pod="kube-system/kube-proxy-cph2s"
Aug 13 00:05:33.973622 kubelet[2606]: I0813 00:05:33.973265 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ss6s\" (UniqueName: \"kubernetes.io/projected/f76c78fc-bad0-4732-90f1-48a54552f45a-kube-api-access-6ss6s\") pod \"kube-proxy-cph2s\" (UID: \"f76c78fc-bad0-4732-90f1-48a54552f45a\") " pod="kube-system/kube-proxy-cph2s"
Aug 13 00:05:33.973622 kubelet[2606]: I0813 00:05:33.973281 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cni-path\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973622 kubelet[2606]: I0813 00:05:33.973313 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hvs\" (UniqueName: \"kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-kube-api-access-h2hvs\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973748 kubelet[2606]: I0813 00:05:33.973327 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a55ef678-4cfd-4ae6-9805-0df4627ac65c-clustermesh-secrets\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:33.973748 kubelet[2606]: I0813 00:05:33.973344 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-hubble-tls\") pod \"cilium-gwhs6\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") " pod="kube-system/cilium-gwhs6"
Aug 13 00:05:34.086739 kubelet[2606]: E0813 00:05:34.086263 2606 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 13 00:05:34.086739 kubelet[2606]: E0813 00:05:34.086293 2606 projected.go:194] Error preparing data for projected volume kube-api-access-6ss6s for pod kube-system/kube-proxy-cph2s: configmap "kube-root-ca.crt" not found
Aug 13 00:05:34.086739 kubelet[2606]: E0813 00:05:34.086343 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f76c78fc-bad0-4732-90f1-48a54552f45a-kube-api-access-6ss6s podName:f76c78fc-bad0-4732-90f1-48a54552f45a nodeName:}" failed. No retries permitted until 2025-08-13 00:05:34.58632316 +0000 UTC m=+5.838517091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6ss6s" (UniqueName: "kubernetes.io/projected/f76c78fc-bad0-4732-90f1-48a54552f45a-kube-api-access-6ss6s") pod "kube-proxy-cph2s" (UID: "f76c78fc-bad0-4732-90f1-48a54552f45a") : configmap "kube-root-ca.crt" not found
Aug 13 00:05:34.088193 kubelet[2606]: E0813 00:05:34.088171 2606 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 13 00:05:34.088193 kubelet[2606]: E0813 00:05:34.088193 2606 projected.go:194] Error preparing data for projected volume kube-api-access-h2hvs for pod kube-system/cilium-gwhs6: configmap "kube-root-ca.crt" not found
Aug 13 00:05:34.088295 kubelet[2606]: E0813 00:05:34.088232 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-kube-api-access-h2hvs podName:a55ef678-4cfd-4ae6-9805-0df4627ac65c nodeName:}" failed. No retries permitted until 2025-08-13 00:05:34.58821984 +0000 UTC m=+5.840413771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h2hvs" (UniqueName: "kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-kube-api-access-h2hvs") pod "cilium-gwhs6" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c") : configmap "kube-root-ca.crt" not found
Aug 13 00:05:34.320404 kubelet[2606]: I0813 00:05:34.319869 2606 status_manager.go:890] "Failed to get status for pod" podUID="879e972d-983e-4dc8-98f9-bf6d900db40e" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" err="pods \"cilium-operator-6c4d7847fc-6n7sw\" is forbidden: User \"system:node:172-234-216-155\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-216-155' and this object"
Aug 13 00:05:34.322421 systemd[1]: Created slice kubepods-besteffort-pod879e972d_983e_4dc8_98f9_bf6d900db40e.slice - libcontainer container kubepods-besteffort-pod879e972d_983e_4dc8_98f9_bf6d900db40e.slice.
Aug 13 00:05:34.376433 kubelet[2606]: I0813 00:05:34.376356 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghg8\" (UniqueName: \"kubernetes.io/projected/879e972d-983e-4dc8-98f9-bf6d900db40e-kube-api-access-sghg8\") pod \"cilium-operator-6c4d7847fc-6n7sw\" (UID: \"879e972d-983e-4dc8-98f9-bf6d900db40e\") " pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:05:34.376433 kubelet[2606]: I0813 00:05:34.376417 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/879e972d-983e-4dc8-98f9-bf6d900db40e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6n7sw\" (UID: \"879e972d-983e-4dc8-98f9-bf6d900db40e\") " pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:05:34.627606 kubelet[2606]: E0813 00:05:34.627447 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:34.628100 containerd[1483]: time="2025-08-13T00:05:34.628034150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6n7sw,Uid:879e972d-983e-4dc8-98f9-bf6d900db40e,Namespace:kube-system,Attempt:0,}"
Aug 13 00:05:34.659195 containerd[1483]: time="2025-08-13T00:05:34.659057960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:05:34.659195 containerd[1483]: time="2025-08-13T00:05:34.659129740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:05:34.659195 containerd[1483]: time="2025-08-13T00:05:34.659147600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:34.660180 containerd[1483]: time="2025-08-13T00:05:34.659753990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:34.686764 systemd[1]: Started cri-containerd-1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48.scope - libcontainer container 1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48.
Aug 13 00:05:34.731762 containerd[1483]: time="2025-08-13T00:05:34.731698150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6n7sw,Uid:879e972d-983e-4dc8-98f9-bf6d900db40e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\""
Aug 13 00:05:34.733030 kubelet[2606]: E0813 00:05:34.732980 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:34.735371 containerd[1483]: time="2025-08-13T00:05:34.735339410Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:05:34.802343 kubelet[2606]: E0813 00:05:34.802114 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:34.803161 containerd[1483]: time="2025-08-13T00:05:34.802598050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gwhs6,Uid:a55ef678-4cfd-4ae6-9805-0df4627ac65c,Namespace:kube-system,Attempt:0,}"
Aug 13 00:05:34.814130 kubelet[2606]: E0813 00:05:34.814085 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:34.817640 containerd[1483]: time="2025-08-13T00:05:34.817595950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cph2s,Uid:f76c78fc-bad0-4732-90f1-48a54552f45a,Namespace:kube-system,Attempt:0,}"
Aug 13 00:05:34.843435 containerd[1483]: time="2025-08-13T00:05:34.843123080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:05:34.843435 containerd[1483]: time="2025-08-13T00:05:34.843182490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:05:34.843435 containerd[1483]: time="2025-08-13T00:05:34.843195880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:34.843435 containerd[1483]: time="2025-08-13T00:05:34.843270350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:34.869036 containerd[1483]: time="2025-08-13T00:05:34.868489510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:05:34.869036 containerd[1483]: time="2025-08-13T00:05:34.868629670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:05:34.872720 containerd[1483]: time="2025-08-13T00:05:34.868683530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:34.872720 containerd[1483]: time="2025-08-13T00:05:34.869453700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:34.870712 systemd[1]: Started cri-containerd-fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d.scope - libcontainer container fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d.
Aug 13 00:05:34.905719 systemd[1]: Started cri-containerd-00dd64f9bc173266576f95a6365d38d33e19b88521197a1f3e486edbc0fb20b0.scope - libcontainer container 00dd64f9bc173266576f95a6365d38d33e19b88521197a1f3e486edbc0fb20b0.
Aug 13 00:05:34.914929 containerd[1483]: time="2025-08-13T00:05:34.914901360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gwhs6,Uid:a55ef678-4cfd-4ae6-9805-0df4627ac65c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\""
Aug 13 00:05:34.915717 kubelet[2606]: E0813 00:05:34.915695 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:34.941541 containerd[1483]: time="2025-08-13T00:05:34.941516800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cph2s,Uid:f76c78fc-bad0-4732-90f1-48a54552f45a,Namespace:kube-system,Attempt:0,} returns sandbox id \"00dd64f9bc173266576f95a6365d38d33e19b88521197a1f3e486edbc0fb20b0\""
Aug 13 00:05:34.942403 kubelet[2606]: E0813 00:05:34.942388 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:34.946179 containerd[1483]: time="2025-08-13T00:05:34.946124210Z" level=info msg="CreateContainer within sandbox \"00dd64f9bc173266576f95a6365d38d33e19b88521197a1f3e486edbc0fb20b0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:05:34.958670 containerd[1483]: time="2025-08-13T00:05:34.958258430Z" level=info msg="CreateContainer within sandbox \"00dd64f9bc173266576f95a6365d38d33e19b88521197a1f3e486edbc0fb20b0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7804cf291a4a56b28150c6d68ced48e482c28deeb964a035bdccfd48b7bc1d1d\""
Aug 13 00:05:34.958996 containerd[1483]: time="2025-08-13T00:05:34.958933520Z" level=info msg="StartContainer for \"7804cf291a4a56b28150c6d68ced48e482c28deeb964a035bdccfd48b7bc1d1d\""
Aug 13 00:05:34.990678 systemd[1]: Started cri-containerd-7804cf291a4a56b28150c6d68ced48e482c28deeb964a035bdccfd48b7bc1d1d.scope - libcontainer container 7804cf291a4a56b28150c6d68ced48e482c28deeb964a035bdccfd48b7bc1d1d.
Aug 13 00:05:35.025389 containerd[1483]: time="2025-08-13T00:05:35.025192520Z" level=info msg="StartContainer for \"7804cf291a4a56b28150c6d68ced48e482c28deeb964a035bdccfd48b7bc1d1d\" returns successfully"
Aug 13 00:05:35.530266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117885561.mount: Deactivated successfully.
Aug 13 00:05:35.897358 kubelet[2606]: E0813 00:05:35.896885 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:35.929673 kubelet[2606]: I0813 00:05:35.929271 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cph2s" podStartSLOduration=2.92925027 podStartE2EDuration="2.92925027s" podCreationTimestamp="2025-08-13 00:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:05:35.91319843 +0000 UTC m=+7.165392361" watchObservedRunningTime="2025-08-13 00:05:35.92925027 +0000 UTC m=+7.181444201"
Aug 13 00:05:36.164391 containerd[1483]: time="2025-08-13T00:05:36.164264600Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:05:36.165105 containerd[1483]: time="2025-08-13T00:05:36.165037990Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 13 00:05:36.166830 containerd[1483]: time="2025-08-13T00:05:36.165821200Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:05:36.168257 containerd[1483]: time="2025-08-13T00:05:36.168217140Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.43284151s"
Aug 13 00:05:36.168342 containerd[1483]: time="2025-08-13T00:05:36.168261040Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:05:36.169502 containerd[1483]: time="2025-08-13T00:05:36.169469320Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 00:05:36.171847 containerd[1483]: time="2025-08-13T00:05:36.171779140Z" level=info msg="CreateContainer within sandbox \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 00:05:36.188764 containerd[1483]: time="2025-08-13T00:05:36.188332570Z" level=info msg="CreateContainer within sandbox \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\""
Aug 13 00:05:36.189185 containerd[1483]: time="2025-08-13T00:05:36.189164390Z" level=info msg="StartContainer for \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\""
Aug 13 00:05:36.229696 systemd[1]: Started cri-containerd-2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205.scope - libcontainer container 2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205.
Aug 13 00:05:36.261588 containerd[1483]: time="2025-08-13T00:05:36.261274620Z" level=info msg="StartContainer for \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\" returns successfully"
Aug 13 00:05:36.924177 kubelet[2606]: E0813 00:05:36.921950 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:37.198241 kubelet[2606]: E0813 00:05:37.198109 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:37.212512 kubelet[2606]: I0813 00:05:37.212468 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" podStartSLOduration=1.7774793 podStartE2EDuration="3.21245187s" podCreationTimestamp="2025-08-13 00:05:34 +0000 UTC" firstStartedPulling="2025-08-13 00:05:34.73433161 +0000 UTC m=+5.986525541" lastFinishedPulling="2025-08-13 00:05:36.16930418 +0000 UTC m=+7.421498111" observedRunningTime="2025-08-13 00:05:36.97402046 +0000 UTC m=+8.226214401" watchObservedRunningTime="2025-08-13 00:05:37.21245187 +0000 UTC m=+8.464645801"
Aug 13 00:05:37.923792 kubelet[2606]: E0813 00:05:37.923745 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:37.924502 kubelet[2606]: E0813 00:05:37.924182 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:38.014173 kubelet[2606]: E0813 00:05:38.013577 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:39.312664 systemd-resolved[1387]: Clock change detected. Flushing caches.
Aug 13 00:05:39.312782 systemd-timesyncd[1388]: Contacted time server [2600:1700:5455:a70::7b:1]:123 (2.flatcar.pool.ntp.org).
Aug 13 00:05:39.312827 systemd-timesyncd[1388]: Initial clock synchronization to Wed 2025-08-13 00:05:39.312586 UTC.
Aug 13 00:05:39.372466 kubelet[2606]: E0813 00:05:39.372027 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:39.711821 kubelet[2606]: E0813 00:05:39.711709 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:39.712382 kubelet[2606]: E0813 00:05:39.712354 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:40.302858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823220435.mount: Deactivated successfully.
Aug 13 00:05:41.852729 containerd[1483]: time="2025-08-13T00:05:41.852673895Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:05:41.853786 containerd[1483]: time="2025-08-13T00:05:41.853745875Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 13 00:05:41.854218 containerd[1483]: time="2025-08-13T00:05:41.854165725Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:05:41.855721 containerd[1483]: time="2025-08-13T00:05:41.855618955Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.89955043s"
Aug 13 00:05:41.855721 containerd[1483]: time="2025-08-13T00:05:41.855648855Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 00:05:41.859490 containerd[1483]: time="2025-08-13T00:05:41.858043085Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:05:41.871069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800387015.mount: Deactivated successfully.
Aug 13 00:05:41.871419 containerd[1483]: time="2025-08-13T00:05:41.871252425Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\""
Aug 13 00:05:41.873725 containerd[1483]: time="2025-08-13T00:05:41.872607375Z" level=info msg="StartContainer for \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\""
Aug 13 00:05:41.916310 systemd[1]: Started cri-containerd-81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834.scope - libcontainer container 81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834.
Aug 13 00:05:41.945631 containerd[1483]: time="2025-08-13T00:05:41.945602085Z" level=info msg="StartContainer for \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\" returns successfully"
Aug 13 00:05:41.957784 systemd[1]: cri-containerd-81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834.scope: Deactivated successfully.
Aug 13 00:05:41.994827 containerd[1483]: time="2025-08-13T00:05:41.994774785Z" level=info msg="shim disconnected" id=81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834 namespace=k8s.io
Aug 13 00:05:41.994827 containerd[1483]: time="2025-08-13T00:05:41.994824545Z" level=warning msg="cleaning up after shim disconnected" id=81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834 namespace=k8s.io
Aug 13 00:05:41.995101 containerd[1483]: time="2025-08-13T00:05:41.994832645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:05:42.718809 kubelet[2606]: E0813 00:05:42.718753 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:42.727653 containerd[1483]: time="2025-08-13T00:05:42.727482465Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:05:42.739581 containerd[1483]: time="2025-08-13T00:05:42.739407485Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\""
Aug 13 00:05:42.742165 containerd[1483]: time="2025-08-13T00:05:42.742001625Z" level=info msg="StartContainer for \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\""
Aug 13 00:05:42.775634 systemd[1]: Started cri-containerd-2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d.scope - libcontainer container 2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d.
Aug 13 00:05:42.807767 containerd[1483]: time="2025-08-13T00:05:42.807715285Z" level=info msg="StartContainer for \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\" returns successfully"
Aug 13 00:05:42.826994 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:05:42.827502 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:05:42.827831 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:05:42.834058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:05:42.834291 systemd[1]: cri-containerd-2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d.scope: Deactivated successfully.
Aug 13 00:05:42.862578 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:05:42.863725 containerd[1483]: time="2025-08-13T00:05:42.863266305Z" level=info msg="shim disconnected" id=2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d namespace=k8s.io
Aug 13 00:05:42.863725 containerd[1483]: time="2025-08-13T00:05:42.863620985Z" level=warning msg="cleaning up after shim disconnected" id=2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d namespace=k8s.io
Aug 13 00:05:42.863725 containerd[1483]: time="2025-08-13T00:05:42.863660385Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:05:42.869020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834-rootfs.mount: Deactivated successfully.
Aug 13 00:05:43.722596 kubelet[2606]: E0813 00:05:43.721995 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:43.729141 containerd[1483]: time="2025-08-13T00:05:43.727803005Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:05:43.755866 containerd[1483]: time="2025-08-13T00:05:43.755809875Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\""
Aug 13 00:05:43.757337 containerd[1483]: time="2025-08-13T00:05:43.757292505Z" level=info msg="StartContainer for \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\""
Aug 13 00:05:43.797264 systemd[1]: Started cri-containerd-7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df.scope - libcontainer container 7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df.
Aug 13 00:05:43.833783 containerd[1483]: time="2025-08-13T00:05:43.833274995Z" level=info msg="StartContainer for \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\" returns successfully"
Aug 13 00:05:43.840462 systemd[1]: cri-containerd-7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df.scope: Deactivated successfully.
Aug 13 00:05:43.863413 containerd[1483]: time="2025-08-13T00:05:43.863338045Z" level=info msg="shim disconnected" id=7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df namespace=k8s.io
Aug 13 00:05:43.863413 containerd[1483]: time="2025-08-13T00:05:43.863395105Z" level=warning msg="cleaning up after shim disconnected" id=7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df namespace=k8s.io
Aug 13 00:05:43.863413 containerd[1483]: time="2025-08-13T00:05:43.863406725Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:05:43.869348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df-rootfs.mount: Deactivated successfully.
Aug 13 00:05:44.637591 update_engine[1465]: I20250813 00:05:44.637501 1465 update_attempter.cc:509] Updating boot flags...
Aug 13 00:05:44.685155 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3245)
Aug 13 00:05:44.739566 kubelet[2606]: E0813 00:05:44.738367 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:44.743587 containerd[1483]: time="2025-08-13T00:05:44.742941275Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:05:44.757092 containerd[1483]: time="2025-08-13T00:05:44.757061225Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\""
Aug 13 00:05:44.758700 containerd[1483]: time="2025-08-13T00:05:44.758642855Z" level=info msg="StartContainer for \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\""
Aug 13 00:05:44.804084 systemd[1]: Started cri-containerd-f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc.scope - libcontainer container f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc.
Aug 13 00:05:44.808647 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3247)
Aug 13 00:05:44.883244 systemd[1]: cri-containerd-f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc.scope: Deactivated successfully.
Aug 13 00:05:44.889854 containerd[1483]: time="2025-08-13T00:05:44.888814625Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55ef678_4cfd_4ae6_9805_0df4627ac65c.slice/cri-containerd-f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc.scope/memory.events\": no such file or directory"
Aug 13 00:05:44.892675 containerd[1483]: time="2025-08-13T00:05:44.892534245Z" level=info msg="StartContainer for \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\" returns successfully"
Aug 13 00:05:44.913194 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3247)
Aug 13 00:05:44.917617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc-rootfs.mount: Deactivated successfully.
Aug 13 00:05:44.929525 containerd[1483]: time="2025-08-13T00:05:44.929369225Z" level=info msg="shim disconnected" id=f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc namespace=k8s.io
Aug 13 00:05:44.929945 containerd[1483]: time="2025-08-13T00:05:44.929800755Z" level=warning msg="cleaning up after shim disconnected" id=f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc namespace=k8s.io
Aug 13 00:05:44.929945 containerd[1483]: time="2025-08-13T00:05:44.929831295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:05:45.740568 kubelet[2606]: E0813 00:05:45.740487 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:45.743059 containerd[1483]: time="2025-08-13T00:05:45.743022725Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:05:45.772350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2386736994.mount: Deactivated successfully.
Aug 13 00:05:45.773055 containerd[1483]: time="2025-08-13T00:05:45.773016795Z" level=info msg="CreateContainer within sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\""
Aug 13 00:05:45.775615 containerd[1483]: time="2025-08-13T00:05:45.774752125Z" level=info msg="StartContainer for \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\""
Aug 13 00:05:45.808262 systemd[1]: Started cri-containerd-9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877.scope - libcontainer container 9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877.
Aug 13 00:05:45.846712 containerd[1483]: time="2025-08-13T00:05:45.846637055Z" level=info msg="StartContainer for \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\" returns successfully"
Aug 13 00:05:45.991277 kubelet[2606]: I0813 00:05:45.991092 2606 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 00:05:46.021695 systemd[1]: Created slice kubepods-burstable-pod179015d1_16c9_4b7c_9794_6a7a529774bc.slice - libcontainer container kubepods-burstable-pod179015d1_16c9_4b7c_9794_6a7a529774bc.slice.
Aug 13 00:05:46.034247 systemd[1]: Created slice kubepods-burstable-pod062a7e65_ff00_41b4_96f7_d4b78785e58f.slice - libcontainer container kubepods-burstable-pod062a7e65_ff00_41b4_96f7_d4b78785e58f.slice.
Aug 13 00:05:46.039975 kubelet[2606]: I0813 00:05:46.039858 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/062a7e65-ff00-41b4-96f7-d4b78785e58f-config-volume\") pod \"coredns-668d6bf9bc-dssvx\" (UID: \"062a7e65-ff00-41b4-96f7-d4b78785e58f\") " pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:05:46.039975 kubelet[2606]: I0813 00:05:46.039894 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvbvp\" (UniqueName: \"kubernetes.io/projected/062a7e65-ff00-41b4-96f7-d4b78785e58f-kube-api-access-rvbvp\") pod \"coredns-668d6bf9bc-dssvx\" (UID: \"062a7e65-ff00-41b4-96f7-d4b78785e58f\") " pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:05:46.039975 kubelet[2606]: I0813 00:05:46.039913 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fg8l\" (UniqueName: \"kubernetes.io/projected/179015d1-16c9-4b7c-9794-6a7a529774bc-kube-api-access-7fg8l\") pod \"coredns-668d6bf9bc-ztmgp\" (UID: \"179015d1-16c9-4b7c-9794-6a7a529774bc\") " pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:05:46.039975 kubelet[2606]: I0813 00:05:46.039931 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/179015d1-16c9-4b7c-9794-6a7a529774bc-config-volume\") pod \"coredns-668d6bf9bc-ztmgp\" (UID: \"179015d1-16c9-4b7c-9794-6a7a529774bc\") " pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:05:46.330827 kubelet[2606]: E0813 00:05:46.330574 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:46.332715 containerd[1483]: time="2025-08-13T00:05:46.332363415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ztmgp,Uid:179015d1-16c9-4b7c-9794-6a7a529774bc,Namespace:kube-system,Attempt:0,}"
Aug 13 00:05:46.337474 kubelet[2606]: E0813 00:05:46.337422 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:46.337838 containerd[1483]: time="2025-08-13T00:05:46.337809065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dssvx,Uid:062a7e65-ff00-41b4-96f7-d4b78785e58f,Namespace:kube-system,Attempt:0,}"
Aug 13 00:05:46.746285 kubelet[2606]: E0813 00:05:46.746087 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:46.763879 kubelet[2606]: I0813 00:05:46.762607 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gwhs6" podStartSLOduration=7.610989555 podStartE2EDuration="13.762588135s" podCreationTimestamp="2025-08-13 00:05:33 +0000 UTC" firstStartedPulling="2025-08-13 00:05:34.91874925 +0000 UTC m=+6.170943181" lastFinishedPulling="2025-08-13 00:05:41.856912895 +0000 UTC m=+12.322541761" observedRunningTime="2025-08-13 00:05:46.761655565 +0000 UTC m=+17.227284431" watchObservedRunningTime="2025-08-13 00:05:46.762588135 +0000 UTC m=+17.228217001"
Aug 13 00:05:47.748380 kubelet[2606]: E0813 00:05:47.748322 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:48.108494 systemd-networkd[1386]: cilium_host: Link UP
Aug 13 00:05:48.109926 systemd-networkd[1386]: cilium_net: Link UP
Aug 13 00:05:48.110199 systemd-networkd[1386]: cilium_net: Gained carrier
Aug 13 00:05:48.110385 systemd-networkd[1386]: cilium_host: Gained carrier
Aug 13 00:05:48.231188 systemd-networkd[1386]: cilium_vxlan: Link UP
Aug 13 00:05:48.231197 systemd-networkd[1386]: cilium_vxlan: Gained carrier
Aug 13 00:05:48.449164 kernel: NET: Registered PF_ALG protocol family
Aug 13 00:05:48.510302 systemd-networkd[1386]: cilium_host: Gained IPv6LL
Aug 13 00:05:48.754793 kubelet[2606]: E0813 00:05:48.754289 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:49.086772 systemd-networkd[1386]: cilium_net: Gained IPv6LL
Aug 13 00:05:49.169956 systemd-networkd[1386]: lxc_health: Link UP
Aug 13 00:05:49.173595 systemd-networkd[1386]: lxc_health: Gained carrier
Aug 13 00:05:49.278820 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL
Aug 13 00:05:49.397218 kernel: eth0: renamed from tmp8261e
Aug 13 00:05:49.395547 systemd-networkd[1386]: lxc88d5f8eeadaf: Link UP
Aug 13 00:05:49.406297 systemd-networkd[1386]: lxc88d5f8eeadaf: Gained carrier
Aug 13 00:05:49.433183 kernel: eth0: renamed from tmpf71ab
Aug 13 00:05:49.438476 systemd-networkd[1386]: lxcc8788dc26e29: Link UP
Aug 13 00:05:49.442524 systemd-networkd[1386]: lxcc8788dc26e29: Gained carrier
Aug 13 00:05:49.759715 kubelet[2606]: E0813 00:05:49.759558 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:49.800861 kubelet[2606]: I0813 00:05:49.800581 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:05:49.801389 kubelet[2606]: I0813 00:05:49.801357 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:05:49.803780 kubelet[2606]: I0813 00:05:49.803749 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:05:49.822758 kubelet[2606]: I0813 00:05:49.821573 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:05:49.822758 kubelet[2606]: I0813 00:05:49.821829 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:05:49.822758 kubelet[2606]: E0813 00:05:49.821888 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:05:49.822758 kubelet[2606]: E0813 00:05:49.821898 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:05:49.822758 kubelet[2606]: E0813 00:05:49.821909 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:05:49.822758 kubelet[2606]: E0813 00:05:49.821919 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:05:49.822758 kubelet[2606]: E0813 00:05:49.821930 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:05:49.822758 kubelet[2606]: E0813 00:05:49.821939 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:05:49.822758 kubelet[2606]: E0813 00:05:49.822157 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:05:49.822758 kubelet[2606]: E0813 00:05:49.822165 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:05:49.822758 kubelet[2606]: I0813 00:05:49.822175 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:05:50.762783 kubelet[2606]: E0813 00:05:50.761543 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:50.945510 systemd-networkd[1386]: lxc88d5f8eeadaf: Gained IPv6LL
Aug 13 00:05:51.070358 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Aug 13 00:05:51.390887 systemd-networkd[1386]: lxcc8788dc26e29: Gained IPv6LL
Aug 13 00:05:51.769290 kubelet[2606]: E0813 00:05:51.768855 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:52.692567 containerd[1483]: time="2025-08-13T00:05:52.692110905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:05:52.692567 containerd[1483]: time="2025-08-13T00:05:52.692335875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:05:52.692567 containerd[1483]: time="2025-08-13T00:05:52.692353455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:52.692567 containerd[1483]: time="2025-08-13T00:05:52.692453335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:52.753471 systemd[1]: Started cri-containerd-f71ab2c72bd15eb195e796baa54fbdef9cc29be227dbb83c00eea8943d5c3886.scope - libcontainer container f71ab2c72bd15eb195e796baa54fbdef9cc29be227dbb83c00eea8943d5c3886.
Aug 13 00:05:52.771676 containerd[1483]: time="2025-08-13T00:05:52.767479625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:05:52.771676 containerd[1483]: time="2025-08-13T00:05:52.770004575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:05:52.771676 containerd[1483]: time="2025-08-13T00:05:52.770023275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:52.773407 containerd[1483]: time="2025-08-13T00:05:52.773196935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:05:52.815273 systemd[1]: Started cri-containerd-8261e2161d0cb7e8207e30646463af31f541c689298bfc6fa0dd1b1f33af458b.scope - libcontainer container 8261e2161d0cb7e8207e30646463af31f541c689298bfc6fa0dd1b1f33af458b.
Aug 13 00:05:52.837984 containerd[1483]: time="2025-08-13T00:05:52.837835235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dssvx,Uid:062a7e65-ff00-41b4-96f7-d4b78785e58f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f71ab2c72bd15eb195e796baa54fbdef9cc29be227dbb83c00eea8943d5c3886\""
Aug 13 00:05:52.838862 kubelet[2606]: E0813 00:05:52.838842 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:52.842433 containerd[1483]: time="2025-08-13T00:05:52.842378505Z" level=info msg="CreateContainer within sandbox \"f71ab2c72bd15eb195e796baa54fbdef9cc29be227dbb83c00eea8943d5c3886\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:05:52.859260 containerd[1483]: time="2025-08-13T00:05:52.859210905Z" level=info msg="CreateContainer within sandbox \"f71ab2c72bd15eb195e796baa54fbdef9cc29be227dbb83c00eea8943d5c3886\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ddbaef6b85d14cc8ca165096778fbd0db8a2d95e3d33794b1d175d27c1f0e6d\""
Aug 13 00:05:52.859785 containerd[1483]: time="2025-08-13T00:05:52.859752655Z" level=info msg="StartContainer for \"3ddbaef6b85d14cc8ca165096778fbd0db8a2d95e3d33794b1d175d27c1f0e6d\""
Aug 13 00:05:52.885349 containerd[1483]: time="2025-08-13T00:05:52.885147225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ztmgp,Uid:179015d1-16c9-4b7c-9794-6a7a529774bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8261e2161d0cb7e8207e30646463af31f541c689298bfc6fa0dd1b1f33af458b\""
Aug 13 00:05:52.887080 kubelet[2606]: E0813 00:05:52.886817 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:05:52.893837 containerd[1483]: time="2025-08-13T00:05:52.893791475Z" level=info msg="CreateContainer within sandbox \"8261e2161d0cb7e8207e30646463af31f541c689298bfc6fa0dd1b1f33af458b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:05:52.917275 systemd[1]: Started cri-containerd-3ddbaef6b85d14cc8ca165096778fbd0db8a2d95e3d33794b1d175d27c1f0e6d.scope - libcontainer container 3ddbaef6b85d14cc8ca165096778fbd0db8a2d95e3d33794b1d175d27c1f0e6d.
Aug 13 00:05:52.919613 containerd[1483]: time="2025-08-13T00:05:52.919571365Z" level=info msg="CreateContainer within sandbox \"8261e2161d0cb7e8207e30646463af31f541c689298bfc6fa0dd1b1f33af458b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6883c995d0bc1589630c28c220868a85514ea32cbb740ea9196919bfc5bc948e\"" Aug 13 00:05:52.926044 containerd[1483]: time="2025-08-13T00:05:52.925390835Z" level=info msg="StartContainer for \"6883c995d0bc1589630c28c220868a85514ea32cbb740ea9196919bfc5bc948e\"" Aug 13 00:05:52.967440 systemd[1]: Started cri-containerd-6883c995d0bc1589630c28c220868a85514ea32cbb740ea9196919bfc5bc948e.scope - libcontainer container 6883c995d0bc1589630c28c220868a85514ea32cbb740ea9196919bfc5bc948e. Aug 13 00:05:52.976440 containerd[1483]: time="2025-08-13T00:05:52.975706995Z" level=info msg="StartContainer for \"3ddbaef6b85d14cc8ca165096778fbd0db8a2d95e3d33794b1d175d27c1f0e6d\" returns successfully" Aug 13 00:05:53.009621 containerd[1483]: time="2025-08-13T00:05:53.009565525Z" level=info msg="StartContainer for \"6883c995d0bc1589630c28c220868a85514ea32cbb740ea9196919bfc5bc948e\" returns successfully" Aug 13 00:05:53.777734 kubelet[2606]: E0813 00:05:53.776202 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:53.780059 kubelet[2606]: E0813 00:05:53.780040 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:53.788605 kubelet[2606]: I0813 00:05:53.788471 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ztmgp" podStartSLOduration=19.788459115 podStartE2EDuration="19.788459115s" podCreationTimestamp="2025-08-13 00:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:05:53.788312645 +0000 UTC m=+24.253941511" watchObservedRunningTime="2025-08-13 00:05:53.788459115 +0000 UTC m=+24.254087991" Aug 13 00:05:53.807025 kubelet[2606]: I0813 00:05:53.806240 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dssvx" podStartSLOduration=19.806220305 podStartE2EDuration="19.806220305s" podCreationTimestamp="2025-08-13 00:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:05:53.804067375 +0000 UTC m=+24.269696241" watchObservedRunningTime="2025-08-13 00:05:53.806220305 +0000 UTC m=+24.271849171" Aug 13 00:05:54.782320 kubelet[2606]: E0813 00:05:54.782164 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:54.782320 kubelet[2606]: E0813 00:05:54.782172 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:55.783753 kubelet[2606]: E0813 00:05:55.783645 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:55.783753 
kubelet[2606]: E0813 00:05:55.783694 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:05:59.841924 kubelet[2606]: I0813 00:05:59.841887 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:05:59.841924 kubelet[2606]: I0813 00:05:59.841925 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:05:59.843961 kubelet[2606]: I0813 00:05:59.843946 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:05:59.854067 kubelet[2606]: I0813 00:05:59.854053 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:05:59.854579 kubelet[2606]: I0813 00:05:59.854561 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:05:59.854623 kubelet[2606]: E0813 00:05:59.854596 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:05:59.854623 kubelet[2606]: E0813 00:05:59.854608 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:05:59.854623 kubelet[2606]: E0813 00:05:59.854615 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:05:59.854623 kubelet[2606]: E0813 00:05:59.854623 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:05:59.854706 kubelet[2606]: E0813 00:05:59.854631 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:05:59.854706 kubelet[2606]: E0813 00:05:59.854639 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:05:59.854706 kubelet[2606]: E0813 00:05:59.854646 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:05:59.854706 kubelet[2606]: E0813 00:05:59.854654 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:05:59.854706 kubelet[2606]: I0813 00:05:59.854663 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:06:09.870918 kubelet[2606]: I0813 00:06:09.870858 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:09.870918 kubelet[2606]: I0813 00:06:09.870901 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:06:09.872963 kubelet[2606]: I0813 00:06:09.872949 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:06:09.884068 kubelet[2606]: I0813 00:06:09.884043 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:09.884228 kubelet[2606]: I0813 00:06:09.884177 2606 
eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:06:09.884228 kubelet[2606]: E0813 00:06:09.884209 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:06:09.884228 kubelet[2606]: E0813 00:06:09.884220 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:06:09.884228 kubelet[2606]: E0813 00:06:09.884228 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:06:09.884364 kubelet[2606]: E0813 00:06:09.884238 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:06:09.884364 kubelet[2606]: E0813 00:06:09.884246 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:06:09.884364 kubelet[2606]: E0813 00:06:09.884254 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:06:09.884364 kubelet[2606]: E0813 00:06:09.884263 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:06:09.884364 kubelet[2606]: E0813 00:06:09.884270 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:06:09.884364 kubelet[2606]: I0813 00:06:09.884278 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:06:19.907632 kubelet[2606]: I0813 00:06:19.907584 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:19.907632 kubelet[2606]: I0813 00:06:19.907636 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:06:19.909562 kubelet[2606]: I0813 00:06:19.909527 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:06:19.921715 kubelet[2606]: I0813 00:06:19.921689 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:19.921829 kubelet[2606]: I0813 00:06:19.921798 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:06:19.921871 kubelet[2606]: E0813 00:06:19.921848 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:06:19.921871 kubelet[2606]: E0813 00:06:19.921861 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:06:19.921871 kubelet[2606]: E0813 00:06:19.921869 2606 eviction_manager.go:609] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:06:19.921999 kubelet[2606]: E0813 00:06:19.921880 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:06:19.921999 kubelet[2606]: E0813 00:06:19.921888 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:06:19.921999 kubelet[2606]: E0813 00:06:19.921897 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:06:19.921999 kubelet[2606]: E0813 00:06:19.921905 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:06:19.921999 kubelet[2606]: E0813 00:06:19.921913 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:06:19.921999 kubelet[2606]: I0813 00:06:19.921921 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:06:29.940219 kubelet[2606]: I0813 00:06:29.940012 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:29.940219 kubelet[2606]: I0813 00:06:29.940051 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:06:29.941942 kubelet[2606]: I0813 00:06:29.941926 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:06:29.952597 kubelet[2606]: I0813 00:06:29.952581 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:29.952697 kubelet[2606]: I0813 00:06:29.952680 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:06:29.952730 kubelet[2606]: E0813 00:06:29.952711 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:06:29.952730 kubelet[2606]: E0813 00:06:29.952723 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:06:29.952791 kubelet[2606]: E0813 00:06:29.952731 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:06:29.952791 kubelet[2606]: E0813 00:06:29.952741 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:06:29.952791 kubelet[2606]: E0813 00:06:29.952749 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:06:29.952791 kubelet[2606]: E0813 00:06:29.952757 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:06:29.952791 kubelet[2606]: E0813 00:06:29.952766 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:06:29.952791 kubelet[2606]: E0813 00:06:29.952774 2606 
eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:06:29.952791 kubelet[2606]: I0813 00:06:29.952783 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:06:39.971268 kubelet[2606]: I0813 00:06:39.971200 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:39.971268 kubelet[2606]: I0813 00:06:39.971242 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:06:39.974797 kubelet[2606]: I0813 00:06:39.974401 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:06:39.989604 kubelet[2606]: I0813 00:06:39.989579 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:39.989720 kubelet[2606]: I0813 00:06:39.989699 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-proxy-cph2s","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:06:39.989753 kubelet[2606]: E0813 00:06:39.989732 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:06:39.989753 kubelet[2606]: E0813 00:06:39.989743 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:06:39.989753 kubelet[2606]: E0813 00:06:39.989751 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:06:39.989827 kubelet[2606]: E0813 00:06:39.989760 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:06:39.989827 kubelet[2606]: E0813 00:06:39.989768 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:06:39.989827 kubelet[2606]: E0813 00:06:39.989776 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:06:39.989827 kubelet[2606]: E0813 00:06:39.989784 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:06:39.989827 kubelet[2606]: E0813 00:06:39.989794 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:06:39.989827 kubelet[2606]: I0813 00:06:39.989803 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:06:40.633885 kubelet[2606]: E0813 00:06:40.633847 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:06:41.634325 kubelet[2606]: E0813 00:06:41.633574 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:06:42.634001 kubelet[2606]: E0813 00:06:42.633955 2606 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:06:50.007463 kubelet[2606]: I0813 00:06:50.007433 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:50.007463 kubelet[2606]: I0813 00:06:50.007469 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:06:50.008661 kubelet[2606]: I0813 00:06:50.008590 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:06:50.019243 kubelet[2606]: I0813 00:06:50.019213 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:06:50.019331 kubelet[2606]: I0813 00:06:50.019300 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:06:50.019361 kubelet[2606]: E0813 00:06:50.019331 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:06:50.019361 kubelet[2606]: E0813 00:06:50.019344 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:06:50.019361 kubelet[2606]: E0813 00:06:50.019353 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:06:50.019457 kubelet[2606]: E0813 00:06:50.019363 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:06:50.019457 kubelet[2606]: E0813 00:06:50.019371 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:06:50.019457 kubelet[2606]: E0813 00:06:50.019380 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:06:50.019457 kubelet[2606]: E0813 00:06:50.019388 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:06:50.019457 kubelet[2606]: E0813 00:06:50.019397 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:06:50.019457 kubelet[2606]: I0813 00:06:50.019406 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:06:55.634912 kubelet[2606]: E0813 00:06:55.633651 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:07:00.044465 kubelet[2606]: I0813 00:07:00.044421 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:00.044465 kubelet[2606]: I0813 00:07:00.044479 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:07:00.046791 kubelet[2606]: I0813 00:07:00.046746 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:07:00.060013 kubelet[2606]: I0813 00:07:00.059982 2606 
eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:00.060211 kubelet[2606]: I0813 00:07:00.060113 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-dssvx","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:07:00.060211 kubelet[2606]: E0813 00:07:00.060191 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:07:00.060211 kubelet[2606]: E0813 00:07:00.060207 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:07:00.060333 kubelet[2606]: E0813 00:07:00.060216 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:07:00.060333 kubelet[2606]: E0813 00:07:00.060227 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:07:00.060333 kubelet[2606]: E0813 00:07:00.060236 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:07:00.060333 kubelet[2606]: E0813 00:07:00.060245 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:07:00.060333 kubelet[2606]: E0813 00:07:00.060253 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:07:00.060333 kubelet[2606]: E0813 00:07:00.060262 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:07:00.060333 kubelet[2606]: I0813 00:07:00.060272 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:07:03.634694 kubelet[2606]: E0813 00:07:03.633790 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:07:05.636112 kubelet[2606]: E0813 00:07:05.636071 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:07:08.634689 kubelet[2606]: E0813 00:07:08.634631 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:07:10.076326 kubelet[2606]: I0813 00:07:10.076283 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:10.076326 kubelet[2606]: I0813 00:07:10.076323 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:07:10.079294 kubelet[2606]: I0813 00:07:10.079278 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:07:10.088939 kubelet[2606]: I0813 00:07:10.088922 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:10.089052 
kubelet[2606]: I0813 00:07:10.089036 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-proxy-cph2s","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:07:10.089097 kubelet[2606]: E0813 00:07:10.089067 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:07:10.089097 kubelet[2606]: E0813 00:07:10.089078 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:07:10.089097 kubelet[2606]: E0813 00:07:10.089086 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:07:10.089097 kubelet[2606]: E0813 00:07:10.089096 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:07:10.089256 kubelet[2606]: E0813 00:07:10.089104 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:07:10.089256 kubelet[2606]: E0813 00:07:10.089112 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:07:10.089256 kubelet[2606]: E0813 00:07:10.089138 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:07:10.089256 kubelet[2606]: E0813 00:07:10.089146 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:07:10.089256 kubelet[2606]: I0813 00:07:10.089155 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:07:20.109408 kubelet[2606]: I0813 00:07:20.109369 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:20.109408 kubelet[2606]: I0813 00:07:20.109410 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:07:20.112780 kubelet[2606]: I0813 00:07:20.112752 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:07:20.122482 kubelet[2606]: I0813 00:07:20.122460 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:20.122583 kubelet[2606]: I0813 00:07:20.122562 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:07:20.122616 kubelet[2606]: E0813 00:07:20.122594 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:07:20.122616 kubelet[2606]: E0813 00:07:20.122606 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:07:20.122616 kubelet[2606]: E0813 00:07:20.122614 2606 
eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:07:20.122887 kubelet[2606]: E0813 00:07:20.122623 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:07:20.122887 kubelet[2606]: E0813 00:07:20.122630 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:07:20.122887 kubelet[2606]: E0813 00:07:20.122839 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:07:20.122887 kubelet[2606]: E0813 00:07:20.122846 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:07:20.122887 kubelet[2606]: E0813 00:07:20.122855 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:07:20.122887 kubelet[2606]: I0813 00:07:20.122864 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:07:20.634216 kubelet[2606]: E0813 00:07:20.634173 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:07:30.140731 kubelet[2606]: I0813 00:07:30.140484 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:30.140731 kubelet[2606]: I0813 00:07:30.140526 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:07:30.143770 kubelet[2606]: I0813 00:07:30.143739 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:07:30.145348 kubelet[2606]: I0813 00:07:30.145316 2606 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" size=320368 runtimeHandler="" Aug 13 00:07:30.145662 containerd[1483]: time="2025-08-13T00:07:30.145609711Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:07:30.147415 containerd[1483]: time="2025-08-13T00:07:30.147376741Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.10\"" Aug 13 00:07:30.148258 containerd[1483]: time="2025-08-13T00:07:30.148226677Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\"" Aug 13 00:07:30.148917 containerd[1483]: time="2025-08-13T00:07:30.148860793Z" level=info msg="ImageDelete event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:07:30.153294 containerd[1483]: time="2025-08-13T00:07:30.153265409Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" returns successfully" Aug 13 00:07:30.153558 kubelet[2606]: I0813 00:07:30.153513 2606 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler="" Aug 13 00:07:30.154071 containerd[1483]: time="2025-08-13T00:07:30.153685646Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:07:30.154517 containerd[1483]: time="2025-08-13T00:07:30.154490462Z" level=info 
msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:07:30.155020 containerd[1483]: time="2025-08-13T00:07:30.154983399Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" Aug 13 00:07:30.155494 containerd[1483]: time="2025-08-13T00:07:30.155373827Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:07:30.229443 containerd[1483]: time="2025-08-13T00:07:30.229406819Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully" Aug 13 00:07:30.239971 kubelet[2606]: I0813 00:07:30.239935 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:30.240098 kubelet[2606]: I0813 00:07:30.240024 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-proxy-cph2s","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:07:30.240098 kubelet[2606]: E0813 00:07:30.240055 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:07:30.240098 kubelet[2606]: E0813 00:07:30.240066 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:07:30.240098 kubelet[2606]: E0813 00:07:30.240076 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:07:30.240098 kubelet[2606]: E0813 00:07:30.240084 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:07:30.240098 kubelet[2606]: E0813 00:07:30.240093 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:07:30.240098 kubelet[2606]: E0813 00:07:30.240102 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:07:30.240295 kubelet[2606]: E0813 00:07:30.240111 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:07:30.240295 kubelet[2606]: E0813 00:07:30.240174 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:07:30.240295 kubelet[2606]: I0813 00:07:30.240186 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:07:31.646343 systemd[1]: Started sshd@7-172.234.216.155:22-139.178.89.65:54300.service - OpenSSH per-connection server daemon (139.178.89.65:54300). Aug 13 00:07:31.968602 sshd[3998]: Accepted publickey for core from 139.178.89.65 port 54300 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:07:31.970013 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:31.974235 systemd-logind[1455]: New session 8 of user core. Aug 13 00:07:31.982392 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 00:07:32.272773 sshd[4000]: Connection closed by 139.178.89.65 port 54300 Aug 13 00:07:32.273644 sshd-session[3998]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:32.276549 systemd[1]: sshd@7-172.234.216.155:22-139.178.89.65:54300.service: Deactivated successfully. Aug 13 00:07:32.278883 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:07:32.280183 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:07:32.281092 systemd-logind[1455]: Removed session 8. Aug 13 00:07:37.338378 systemd[1]: Started sshd@8-172.234.216.155:22-139.178.89.65:54304.service - OpenSSH per-connection server daemon (139.178.89.65:54304). Aug 13 00:07:37.661342 sshd[4016]: Accepted publickey for core from 139.178.89.65 port 54304 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:07:37.662816 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:37.667460 systemd-logind[1455]: New session 9 of user core. Aug 13 00:07:37.675234 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:07:37.957506 sshd[4018]: Connection closed by 139.178.89.65 port 54304 Aug 13 00:07:37.958163 sshd-session[4016]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:37.962749 systemd[1]: sshd@8-172.234.216.155:22-139.178.89.65:54304.service: Deactivated successfully. Aug 13 00:07:37.965036 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:07:37.965966 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:07:37.968091 systemd-logind[1455]: Removed session 9. Aug 13 00:07:40.257306 kubelet[2606]: I0813 00:07:40.257259 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:40.257306 kubelet[2606]: I0813 00:07:40.257309 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:07:40.259588 kubelet[2606]: I0813 00:07:40.259486 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:07:40.271495 kubelet[2606]: I0813 00:07:40.271466 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:40.271659 kubelet[2606]: I0813 00:07:40.271571 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:07:40.271659 kubelet[2606]: E0813 00:07:40.271601 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:07:40.271659 kubelet[2606]: E0813 00:07:40.271613 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:07:40.271659 kubelet[2606]: E0813 00:07:40.271622 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:07:40.271659 kubelet[2606]: E0813 00:07:40.271631 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:07:40.271659 kubelet[2606]: E0813 00:07:40.271639 2606 eviction_manager.go:609] "Eviction manager: cannot 
evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:07:40.271659 kubelet[2606]: E0813 00:07:40.271648 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:07:40.271659 kubelet[2606]: E0813 00:07:40.271657 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:07:40.271659 kubelet[2606]: E0813 00:07:40.271665 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:07:40.271854 kubelet[2606]: I0813 00:07:40.271674 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:07:43.037379 systemd[1]: Started sshd@9-172.234.216.155:22-139.178.89.65:42482.service - OpenSSH per-connection server daemon (139.178.89.65:42482). Aug 13 00:07:43.362093 sshd[4032]: Accepted publickey for core from 139.178.89.65 port 42482 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:07:43.363625 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:43.368314 systemd-logind[1455]: New session 10 of user core. Aug 13 00:07:43.383245 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:07:43.664739 sshd[4034]: Connection closed by 139.178.89.65 port 42482 Aug 13 00:07:43.665649 sshd-session[4032]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:43.669788 systemd[1]: sshd@9-172.234.216.155:22-139.178.89.65:42482.service: Deactivated successfully. Aug 13 00:07:43.672038 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:07:43.672790 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:07:43.674371 systemd-logind[1455]: Removed session 10. Aug 13 00:07:45.634160 kubelet[2606]: E0813 00:07:45.633323 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:07:48.733344 systemd[1]: Started sshd@10-172.234.216.155:22-139.178.89.65:42498.service - OpenSSH per-connection server daemon (139.178.89.65:42498). Aug 13 00:07:49.065889 sshd[4047]: Accepted publickey for core from 139.178.89.65 port 42498 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:07:49.067610 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:49.071766 systemd-logind[1455]: New session 11 of user core. Aug 13 00:07:49.078252 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:07:49.370088 sshd[4049]: Connection closed by 139.178.89.65 port 42498 Aug 13 00:07:49.370887 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:49.374700 systemd[1]: sshd@10-172.234.216.155:22-139.178.89.65:42498.service: Deactivated successfully. Aug 13 00:07:49.376879 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:07:49.378433 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:07:49.379758 systemd-logind[1455]: Removed session 11. 
Aug 13 00:07:50.289387 kubelet[2606]: I0813 00:07:50.289343 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:50.289387 kubelet[2606]: I0813 00:07:50.289384 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:07:50.291373 kubelet[2606]: I0813 00:07:50.291345 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:07:50.303654 kubelet[2606]: I0813 00:07:50.303616 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:07:50.303799 kubelet[2606]: I0813 00:07:50.303750 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:07:50.303799 kubelet[2606]: E0813 00:07:50.303783 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:07:50.303799 kubelet[2606]: E0813 00:07:50.303794 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:07:50.303898 kubelet[2606]: E0813 00:07:50.303802 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:07:50.303898 kubelet[2606]: E0813 00:07:50.303812 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:07:50.303898 kubelet[2606]: E0813 00:07:50.303821 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:07:50.303898 kubelet[2606]: E0813 00:07:50.303829 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:07:50.303898 kubelet[2606]: E0813 00:07:50.303838 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:07:50.303898 kubelet[2606]: E0813 00:07:50.303846 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:07:50.303898 kubelet[2606]: I0813 00:07:50.303857 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:07:54.433325 systemd[1]: Started sshd@11-172.234.216.155:22-139.178.89.65:45782.service - OpenSSH per-connection server daemon (139.178.89.65:45782). Aug 13 00:07:54.759350 sshd[4062]: Accepted publickey for core from 139.178.89.65 port 45782 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:07:54.760623 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:54.764896 systemd-logind[1455]: New session 12 of user core. Aug 13 00:07:54.772263 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:07:55.058873 sshd[4064]: Connection closed by 139.178.89.65 port 45782 Aug 13 00:07:55.059512 sshd-session[4062]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:55.062884 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. 
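
One detail worth noticing across the cycles above: the "pods ranked for eviction" list is not a fixed order. At 00:07:00 coredns-dssvx ranks ahead of coredns-ztmgp, and at 00:06:39 and 00:07:10 kube-proxy-cph2s ranks ahead of kube-controller-manager, which is consistent with ranking by measured ephemeral-storage consumption rather than by a static list. A toy version of one such criterion, usage above request; the figures are invented, and the real kubelet comparator also folds in pod priority and whether usage exceeds requests at all:

    def rank_for_eviction(pods):
        """Toy ranking: largest usage over request is evicted first."""
        return sorted(pods, key=lambda p: p["usage"] - p["request"], reverse=True)

    pods = [  # illustrative numbers only, not measured from this node
        {"name": "kube-system/kube-proxy-cph2s", "usage": 12, "request": 0},
        {"name": "kube-system/kube-controller-manager-172-234-216-155", "usage": 9, "request": 0},
    ]
    print([p["name"] for p in rank_for_eviction(pods)])
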
Aug 13 00:07:55.063668 systemd[1]: sshd@11-172.234.216.155:22-139.178.89.65:45782.service: Deactivated successfully. Aug 13 00:07:55.066362 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:07:55.067199 systemd-logind[1455]: Removed session 12. Aug 13 00:07:55.128362 systemd[1]: Started sshd@12-172.234.216.155:22-139.178.89.65:45794.service - OpenSSH per-connection server daemon (139.178.89.65:45794). Aug 13 00:07:55.467707 sshd[4077]: Accepted publickey for core from 139.178.89.65 port 45794 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:07:55.469477 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:55.473537 systemd-logind[1455]: New session 13 of user core. Aug 13 00:07:55.481281 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:07:55.827578 sshd[4079]: Connection closed by 139.178.89.65 port 45794 Aug 13 00:07:55.828424 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:55.832695 systemd[1]: sshd@12-172.234.216.155:22-139.178.89.65:45794.service: Deactivated successfully. Aug 13 00:07:55.834825 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:07:55.836193 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:07:55.837528 systemd-logind[1455]: Removed session 13. Aug 13 00:07:55.895379 systemd[1]: Started sshd@13-172.234.216.155:22-139.178.89.65:45806.service - OpenSSH per-connection server daemon (139.178.89.65:45806). Aug 13 00:07:56.229109 sshd[4089]: Accepted publickey for core from 139.178.89.65 port 45806 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:07:56.230557 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:56.237198 systemd-logind[1455]: New session 14 of user core. Aug 13 00:07:56.240241 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:07:56.537322 sshd[4091]: Connection closed by 139.178.89.65 port 45806 Aug 13 00:07:56.537921 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:56.541974 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:07:56.542814 systemd[1]: sshd@13-172.234.216.155:22-139.178.89.65:45806.service: Deactivated successfully. Aug 13 00:07:56.545525 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:07:56.546478 systemd-logind[1455]: Removed session 14. 
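
Sessions 12 through 14 above all follow the same per-connection pattern: systemd starts an sshd@N-...service for the incoming socket, pam_unix opens a session for core, the client disconnects almost immediately, and logind removes the session. A quick sketch pairing "New session" with "Removed session" logind entries to time each one (the timestamp layout is assumed from this journal):

    import re
    from datetime import datetime

    TS = r'(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6})'
    NEW_RE = re.compile(TS + r' systemd-logind\[\d+\]: New session (\d+) of user')
    END_RE = re.compile(TS + r' systemd-logind\[\d+\]: Removed session (\d+)\.')

    def session_durations(lines, year=2025):
        """Return {session id: seconds open} for sessions opened and closed here."""
        fmt = '%Y %b %d %H:%M:%S.%f'
        opened, durations = {}, {}
        for line in lines:
            for ts, sid in NEW_RE.findall(line):
                opened[sid] = datetime.strptime(f'{year} {ts}', fmt)
            for ts, sid in END_RE.findall(line):
                if sid in opened:
                    closed = datetime.strptime(f'{year} {ts}', fmt)
                    durations[sid] = (closed - opened.pop(sid)).total_seconds()
        return durations

Session 13 above opens at 00:07:55.473537 and is removed at 00:07:55.837528, about 0.36 s; connections this short and this regular look more like scripted checks or probes than interactive logins.
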
Aug 13 00:08:00.323240 kubelet[2606]: I0813 00:08:00.323156 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:00.323240 kubelet[2606]: I0813 00:08:00.323197 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:08:00.325271 kubelet[2606]: I0813 00:08:00.325257 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:08:00.337381 kubelet[2606]: I0813 00:08:00.337327 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:00.337547 kubelet[2606]: I0813 00:08:00.337528 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:08:00.337615 kubelet[2606]: E0813 00:08:00.337563 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:08:00.337615 kubelet[2606]: E0813 00:08:00.337577 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:08:00.337615 kubelet[2606]: E0813 00:08:00.337586 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:08:00.337615 kubelet[2606]: E0813 00:08:00.337595 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:08:00.337615 kubelet[2606]: E0813 00:08:00.337603 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:08:00.337615 kubelet[2606]: E0813 00:08:00.337611 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:08:00.337615 kubelet[2606]: E0813 00:08:00.337620 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:08:00.337935 kubelet[2606]: E0813 00:08:00.337628 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:08:00.337935 kubelet[2606]: I0813 00:08:00.337639 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:08:01.601339 systemd[1]: Started sshd@14-172.234.216.155:22-139.178.89.65:44326.service - OpenSSH per-connection server daemon (139.178.89.65:44326). Aug 13 00:08:01.925384 sshd[4103]: Accepted publickey for core from 139.178.89.65 port 44326 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:01.927683 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:01.932843 systemd-logind[1455]: New session 15 of user core. Aug 13 00:08:01.939278 systemd[1]: Started session-15.scope - Session 15 of User core. 
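
Every eviction cycle in this section ends identically, and the 00:08:00 pass above is typical: the manager decides it must evict to reclaim ephemeral-storage, walks the ranked list, declines all eight entries because every one is a critical pod, and reports that it is unable to evict anything. A condensed model of that loop; the criticality predicate is a stand-in (real criticality comes from priority class and static/mirror status, not from the namespace):

    def eviction_pass(ranked_pods, is_critical):
        """Mimic one pass: skip critical pods, evict the first non-critical one."""
        for pod in ranked_pods:
            if is_critical(pod):
                print(f'Eviction manager: cannot evict a critical pod pod="{pod}"')
                continue
            print(f'evicting pod="{pod}"')
            return pod  # at most one pod is evicted per pass
        print("Eviction manager: unable to evict any pods from the node")
        return None

    ranked = ["kube-system/cilium-operator-6c4d7847fc-6n7sw",
              "kube-system/coredns-668d6bf9bc-ztmgp",
              "kube-system/kube-scheduler-172-234-216-155"]
    eviction_pass(ranked, is_critical=lambda pod: pod.startswith("kube-system/"))

Since nothing on this node is evictable, eviction alone can never clear the pressure; freeing disk or growing the filesystem is the only way out of this loop.
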
Aug 13 00:08:02.233373 sshd[4105]: Connection closed by 139.178.89.65 port 44326 Aug 13 00:08:02.234181 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:02.237505 systemd[1]: sshd@14-172.234.216.155:22-139.178.89.65:44326.service: Deactivated successfully. Aug 13 00:08:02.239312 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:08:02.240837 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:08:02.242277 systemd-logind[1455]: Removed session 15. Aug 13 00:08:02.301334 systemd[1]: Started sshd@15-172.234.216.155:22-139.178.89.65:44336.service - OpenSSH per-connection server daemon (139.178.89.65:44336). Aug 13 00:08:02.632012 sshd[4117]: Accepted publickey for core from 139.178.89.65 port 44336 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:02.633782 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:02.640091 systemd-logind[1455]: New session 16 of user core. Aug 13 00:08:02.646236 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:08:02.956470 sshd[4119]: Connection closed by 139.178.89.65 port 44336 Aug 13 00:08:02.957300 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:02.961976 systemd[1]: sshd@15-172.234.216.155:22-139.178.89.65:44336.service: Deactivated successfully. Aug 13 00:08:02.964576 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:08:02.965699 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:08:02.966851 systemd-logind[1455]: Removed session 16. Aug 13 00:08:03.025682 systemd[1]: Started sshd@16-172.234.216.155:22-139.178.89.65:44338.service - OpenSSH per-connection server daemon (139.178.89.65:44338). Aug 13 00:08:03.350266 sshd[4129]: Accepted publickey for core from 139.178.89.65 port 44338 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:03.352386 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:03.357651 systemd-logind[1455]: New session 17 of user core. Aug 13 00:08:03.373273 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:08:04.129358 sshd[4131]: Connection closed by 139.178.89.65 port 44338 Aug 13 00:08:04.129985 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:04.134900 systemd[1]: sshd@16-172.234.216.155:22-139.178.89.65:44338.service: Deactivated successfully. Aug 13 00:08:04.142546 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:08:04.143497 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:08:04.144639 systemd-logind[1455]: Removed session 17. Aug 13 00:08:04.194336 systemd[1]: Started sshd@17-172.234.216.155:22-139.178.89.65:44352.service - OpenSSH per-connection server daemon (139.178.89.65:44352). Aug 13 00:08:04.529144 sshd[4148]: Accepted publickey for core from 139.178.89.65 port 44352 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:04.530649 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:04.535557 systemd-logind[1455]: New session 18 of user core. Aug 13 00:08:04.541228 systemd[1]: Started session-18.scope - Session 18 of User core. 
Aug 13 00:08:04.945505 sshd[4150]: Connection closed by 139.178.89.65 port 44352 Aug 13 00:08:04.946368 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:04.949990 systemd[1]: sshd@17-172.234.216.155:22-139.178.89.65:44352.service: Deactivated successfully. Aug 13 00:08:04.952189 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:08:04.953749 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:08:04.955498 systemd-logind[1455]: Removed session 18. Aug 13 00:08:05.006158 systemd[1]: Started sshd@18-172.234.216.155:22-139.178.89.65:44356.service - OpenSSH per-connection server daemon (139.178.89.65:44356). Aug 13 00:08:05.337386 sshd[4160]: Accepted publickey for core from 139.178.89.65 port 44356 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:05.339039 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:05.344175 systemd-logind[1455]: New session 19 of user core. Aug 13 00:08:05.352258 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:08:05.648282 sshd[4162]: Connection closed by 139.178.89.65 port 44356 Aug 13 00:08:05.649252 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:05.653071 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:08:05.654601 systemd[1]: sshd@18-172.234.216.155:22-139.178.89.65:44356.service: Deactivated successfully. Aug 13 00:08:05.657275 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:08:05.658273 systemd-logind[1455]: Removed session 19. Aug 13 00:08:06.633659 kubelet[2606]: E0813 00:08:06.633616 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:08:08.633812 kubelet[2606]: E0813 00:08:08.633745 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:08:10.353043 kubelet[2606]: I0813 00:08:10.353011 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:10.353043 kubelet[2606]: I0813 00:08:10.353048 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:08:10.355354 kubelet[2606]: I0813 00:08:10.355221 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:08:10.364348 kubelet[2606]: I0813 00:08:10.364312 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:10.364443 kubelet[2606]: I0813 00:08:10.364399 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:08:10.364443 kubelet[2606]: E0813 00:08:10.364425 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:08:10.364443 kubelet[2606]: E0813 00:08:10.364435 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:08:10.364443 kubelet[2606]: E0813 00:08:10.364443 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:08:10.364548 kubelet[2606]: E0813 00:08:10.364451 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:08:10.364548 kubelet[2606]: E0813 00:08:10.364459 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:08:10.364548 kubelet[2606]: E0813 00:08:10.364466 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:08:10.364548 kubelet[2606]: E0813 00:08:10.364475 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:08:10.364548 kubelet[2606]: E0813 00:08:10.364483 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:08:10.364548 kubelet[2606]: I0813 00:08:10.364492 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:08:10.634406 kubelet[2606]: E0813 00:08:10.633974 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:08:10.715330 systemd[1]: Started sshd@19-172.234.216.155:22-139.178.89.65:52128.service - OpenSSH per-connection server daemon (139.178.89.65:52128). Aug 13 00:08:11.039315 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 52128 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:11.040700 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:11.044770 systemd-logind[1455]: New session 20 of user core. Aug 13 00:08:11.050246 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:08:11.335099 sshd[4179]: Connection closed by 139.178.89.65 port 52128 Aug 13 00:08:11.335891 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:11.339788 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:08:11.340614 systemd[1]: sshd@19-172.234.216.155:22-139.178.89.65:52128.service: Deactivated successfully. Aug 13 00:08:11.343023 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:08:11.343814 systemd-logind[1455]: Removed session 20. Aug 13 00:08:16.402355 systemd[1]: Started sshd@20-172.234.216.155:22-139.178.89.65:52138.service - OpenSSH per-connection server daemon (139.178.89.65:52138). Aug 13 00:08:16.734439 sshd[4191]: Accepted publickey for core from 139.178.89.65 port 52138 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:16.735888 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:16.740547 systemd-logind[1455]: New session 21 of user core. Aug 13 00:08:16.747246 systemd[1]: Started session-21.scope - Session 21 of User core. 
Aug 13 00:08:17.038118 sshd[4193]: Connection closed by 139.178.89.65 port 52138 Aug 13 00:08:17.038706 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:17.043094 systemd[1]: sshd@20-172.234.216.155:22-139.178.89.65:52138.service: Deactivated successfully. Aug 13 00:08:17.045461 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:08:17.047690 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:08:17.048744 systemd-logind[1455]: Removed session 21. Aug 13 00:08:20.380513 kubelet[2606]: I0813 00:08:20.380476 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:20.380513 kubelet[2606]: I0813 00:08:20.380515 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:08:20.382543 kubelet[2606]: I0813 00:08:20.382266 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:08:20.393853 kubelet[2606]: I0813 00:08:20.393832 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:20.393957 kubelet[2606]: I0813 00:08:20.393936 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:08:20.394010 kubelet[2606]: E0813 00:08:20.393968 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:08:20.394010 kubelet[2606]: E0813 00:08:20.393980 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:08:20.394010 kubelet[2606]: E0813 00:08:20.393988 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:08:20.394010 kubelet[2606]: E0813 00:08:20.393997 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:08:20.394010 kubelet[2606]: E0813 00:08:20.394005 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:08:20.394010 kubelet[2606]: E0813 00:08:20.394013 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:08:20.394146 kubelet[2606]: E0813 00:08:20.394021 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:08:20.394146 kubelet[2606]: E0813 00:08:20.394029 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:08:20.394146 kubelet[2606]: I0813 00:08:20.394038 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:08:21.635415 kubelet[2606]: E0813 00:08:21.633898 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:08:21.636374 kubelet[2606]: E0813 00:08:21.636224 2606 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:08:22.106626 systemd[1]: Started sshd@21-172.234.216.155:22-139.178.89.65:40664.service - OpenSSH per-connection server daemon (139.178.89.65:40664). Aug 13 00:08:22.441733 sshd[4205]: Accepted publickey for core from 139.178.89.65 port 40664 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:22.443525 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:22.450732 systemd-logind[1455]: New session 22 of user core. Aug 13 00:08:22.460260 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:08:22.740524 sshd[4207]: Connection closed by 139.178.89.65 port 40664 Aug 13 00:08:22.741411 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:22.746076 systemd[1]: sshd@21-172.234.216.155:22-139.178.89.65:40664.service: Deactivated successfully. Aug 13 00:08:22.749056 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:08:22.750375 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:08:22.751644 systemd-logind[1455]: Removed session 22. Aug 13 00:08:27.809363 systemd[1]: Started sshd@22-172.234.216.155:22-139.178.89.65:40668.service - OpenSSH per-connection server daemon (139.178.89.65:40668). Aug 13 00:08:28.134232 sshd[4221]: Accepted publickey for core from 139.178.89.65 port 40668 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:28.135788 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:28.140469 systemd-logind[1455]: New session 23 of user core. Aug 13 00:08:28.149254 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:08:28.427904 sshd[4223]: Connection closed by 139.178.89.65 port 40668 Aug 13 00:08:28.428728 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:28.432352 systemd[1]: sshd@22-172.234.216.155:22-139.178.89.65:40668.service: Deactivated successfully. Aug 13 00:08:28.434309 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:08:28.435055 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:08:28.436009 systemd-logind[1455]: Removed session 23. 
Aug 13 00:08:28.633867 kubelet[2606]: E0813 00:08:28.633798 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:08:29.634731 kubelet[2606]: E0813 00:08:29.634087 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:08:30.409896 kubelet[2606]: I0813 00:08:30.409859 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:30.409896 kubelet[2606]: I0813 00:08:30.409899 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:08:30.412673 kubelet[2606]: I0813 00:08:30.412644 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:08:30.423899 kubelet[2606]: I0813 00:08:30.423879 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:30.424000 kubelet[2606]: I0813 00:08:30.423984 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-dssvx","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:08:30.424034 kubelet[2606]: E0813 00:08:30.424013 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:08:30.424034 kubelet[2606]: E0813 00:08:30.424024 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:08:30.424034 kubelet[2606]: E0813 00:08:30.424033 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:08:30.424094 kubelet[2606]: E0813 00:08:30.424041 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:08:30.424094 kubelet[2606]: E0813 00:08:30.424049 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:08:30.424094 kubelet[2606]: E0813 00:08:30.424058 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:08:30.424094 kubelet[2606]: E0813 00:08:30.424066 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:08:30.424094 kubelet[2606]: E0813 00:08:30.424073 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:08:30.424094 kubelet[2606]: I0813 00:08:30.424082 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:08:33.490325 systemd[1]: Started sshd@23-172.234.216.155:22-139.178.89.65:46068.service - OpenSSH per-connection server daemon (139.178.89.65:46068). 
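Each roughly ten-second burst above is one eviction-manager cycle: the kubelet detects ephemeral-storage pressure, garbage-collects containers and images, ranks every pod on the node for eviction, then skips each candidate because all of them are critical kube-system workloads, and ends with "unable to evict any pods from the node". A condensed model of that loop, where the namespace test stands in for the real criticality check (priority classes and mirror-pod status, which this log does not show):

    CRITICAL_NAMESPACES = {"kube-system"}  # stand-in for the real priority check

    def run_eviction_cycle(ranked_pods):
        evicted = []
        for pod in ranked_pods:
            ns = pod.split("/", 1)[0]
            if ns in CRITICAL_NAMESPACES:
                print(f'E "cannot evict a critical pod" pod={pod!r}')
                continue
            evicted.append(pod)
            break  # evicting one pod may already relieve the pressure
        if not evicted:
            print('I "unable to evict any pods from the node"')
        return evicted

    run_eviction_cycle([
        "kube-system/cilium-operator-6c4d7847fc-6n7sw",
        "kube-system/coredns-668d6bf9bc-ztmgp",
        "kube-system/kube-apiserver-172-234-216-155",
    ])

Because the node runs only critical pods, no cycle can ever relieve the pressure, which is why the identical block recurs for the remainder of the capture.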
Aug 13 00:08:33.810179 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 46068 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:33.811670 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:33.816209 systemd-logind[1455]: New session 24 of user core. Aug 13 00:08:33.824232 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:08:34.101864 sshd[4239]: Connection closed by 139.178.89.65 port 46068 Aug 13 00:08:34.102621 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:34.106674 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:08:34.107438 systemd[1]: sshd@23-172.234.216.155:22-139.178.89.65:46068.service: Deactivated successfully. Aug 13 00:08:34.109782 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:08:34.110913 systemd-logind[1455]: Removed session 24. Aug 13 00:08:39.165432 systemd[1]: Started sshd@24-172.234.216.155:22-139.178.89.65:38338.service - OpenSSH per-connection server daemon (139.178.89.65:38338). Aug 13 00:08:39.500320 sshd[4253]: Accepted publickey for core from 139.178.89.65 port 38338 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:39.501889 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:39.507526 systemd-logind[1455]: New session 25 of user core. Aug 13 00:08:39.515483 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:08:39.807224 sshd[4255]: Connection closed by 139.178.89.65 port 38338 Aug 13 00:08:39.807844 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:39.812507 systemd[1]: sshd@24-172.234.216.155:22-139.178.89.65:38338.service: Deactivated successfully. Aug 13 00:08:39.814909 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:08:39.816768 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:08:39.817707 systemd-logind[1455]: Removed session 25. 
Aug 13 00:08:40.441249 kubelet[2606]: I0813 00:08:40.441219 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:40.441249 kubelet[2606]: I0813 00:08:40.441258 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:08:40.444481 kubelet[2606]: I0813 00:08:40.444460 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:08:40.455891 kubelet[2606]: I0813 00:08:40.455865 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:40.455989 kubelet[2606]: I0813 00:08:40.455964 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:08:40.456030 kubelet[2606]: E0813 00:08:40.455998 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:08:40.456030 kubelet[2606]: E0813 00:08:40.456010 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:08:40.456030 kubelet[2606]: E0813 00:08:40.456019 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:08:40.456030 kubelet[2606]: E0813 00:08:40.456029 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:08:40.456301 kubelet[2606]: E0813 00:08:40.456037 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:08:40.456301 kubelet[2606]: E0813 00:08:40.456047 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:08:40.456301 kubelet[2606]: E0813 00:08:40.456054 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:08:40.456301 kubelet[2606]: E0813 00:08:40.456062 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:08:40.456301 kubelet[2606]: I0813 00:08:40.456071 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:08:44.874321 systemd[1]: Started sshd@25-172.234.216.155:22-139.178.89.65:38346.service - OpenSSH per-connection server daemon (139.178.89.65:38346). Aug 13 00:08:45.208434 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 38346 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:45.209612 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:45.213396 systemd-logind[1455]: New session 26 of user core. Aug 13 00:08:45.221231 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 00:08:45.506428 sshd[4269]: Connection closed by 139.178.89.65 port 38346 Aug 13 00:08:45.507262 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:45.511101 systemd[1]: sshd@25-172.234.216.155:22-139.178.89.65:38346.service: Deactivated successfully. Aug 13 00:08:45.512976 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:08:45.513675 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:08:45.514588 systemd-logind[1455]: Removed session 26. Aug 13 00:08:50.470809 kubelet[2606]: I0813 00:08:50.470774 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:50.470809 kubelet[2606]: I0813 00:08:50.470811 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:08:50.473796 kubelet[2606]: I0813 00:08:50.473774 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:08:50.483067 kubelet[2606]: I0813 00:08:50.483045 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:08:50.483214 kubelet[2606]: I0813 00:08:50.483193 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:08:50.483245 kubelet[2606]: E0813 00:08:50.483224 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:08:50.483245 kubelet[2606]: E0813 00:08:50.483236 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:08:50.483245 kubelet[2606]: E0813 00:08:50.483244 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:08:50.483309 kubelet[2606]: E0813 00:08:50.483253 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:08:50.483309 kubelet[2606]: E0813 00:08:50.483261 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:08:50.483309 kubelet[2606]: E0813 00:08:50.483269 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:08:50.483309 kubelet[2606]: E0813 00:08:50.483276 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:08:50.483309 kubelet[2606]: E0813 00:08:50.483284 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:08:50.483309 kubelet[2606]: I0813 00:08:50.483293 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:08:50.572545 systemd[1]: Started sshd@26-172.234.216.155:22-139.178.89.65:40342.service - OpenSSH per-connection server daemon (139.178.89.65:40342). 
Aug 13 00:08:50.898567 sshd[4281]: Accepted publickey for core from 139.178.89.65 port 40342 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:50.900316 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:50.904798 systemd-logind[1455]: New session 27 of user core. Aug 13 00:08:50.912413 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:08:51.196576 sshd[4283]: Connection closed by 139.178.89.65 port 40342 Aug 13 00:08:51.197356 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:51.200914 systemd[1]: sshd@26-172.234.216.155:22-139.178.89.65:40342.service: Deactivated successfully. Aug 13 00:08:51.202884 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:08:51.203740 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:08:51.204555 systemd-logind[1455]: Removed session 27. Aug 13 00:08:56.256521 systemd[1]: Started sshd@27-172.234.216.155:22-139.178.89.65:40350.service - OpenSSH per-connection server daemon (139.178.89.65:40350). Aug 13 00:08:56.580093 sshd[4295]: Accepted publickey for core from 139.178.89.65 port 40350 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:08:56.581417 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:56.585144 systemd-logind[1455]: New session 28 of user core. Aug 13 00:08:56.589234 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 00:08:56.870349 sshd[4297]: Connection closed by 139.178.89.65 port 40350 Aug 13 00:08:56.871062 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:56.873700 systemd[1]: sshd@27-172.234.216.155:22-139.178.89.65:40350.service: Deactivated successfully. Aug 13 00:08:56.875716 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:08:56.876896 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:08:56.878019 systemd-logind[1455]: Removed session 28. Aug 13 00:08:59.919050 systemd[1]: Started sshd@28-172.234.216.155:22-172.236.228.198:30474.service - OpenSSH per-connection server daemon (172.236.228.198:30474). 
Aug 13 00:09:00.499402 kubelet[2606]: I0813 00:09:00.499375 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:00.499402 kubelet[2606]: I0813 00:09:00.499409 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:09:00.502314 kubelet[2606]: I0813 00:09:00.502290 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:09:00.512271 kubelet[2606]: I0813 00:09:00.512255 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:00.512356 kubelet[2606]: I0813 00:09:00.512341 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:09:00.512390 kubelet[2606]: E0813 00:09:00.512368 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:09:00.512390 kubelet[2606]: E0813 00:09:00.512378 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:09:00.512390 kubelet[2606]: E0813 00:09:00.512387 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:09:00.512513 kubelet[2606]: E0813 00:09:00.512396 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:09:00.512513 kubelet[2606]: E0813 00:09:00.512404 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:09:00.512513 kubelet[2606]: E0813 00:09:00.512412 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:09:00.512513 kubelet[2606]: E0813 00:09:00.512420 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:09:00.512513 kubelet[2606]: E0813 00:09:00.512428 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:09:00.512513 kubelet[2606]: I0813 00:09:00.512437 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:09:00.956668 sshd[4310]: Connection closed by 172.236.228.198 port 30474 [preauth] Aug 13 00:09:00.959080 systemd[1]: sshd@28-172.234.216.155:22-172.236.228.198:30474.service: Deactivated successfully. Aug 13 00:09:01.042588 systemd[1]: Started sshd@29-172.234.216.155:22-172.236.228.198:30490.service - OpenSSH per-connection server daemon (172.236.228.198:30490). Aug 13 00:09:01.937324 systemd[1]: Started sshd@30-172.234.216.155:22-139.178.89.65:45158.service - OpenSSH per-connection server daemon (139.178.89.65:45158). Aug 13 00:09:02.066716 sshd[4315]: Connection closed by 172.236.228.198 port 30490 [preauth] Aug 13 00:09:02.069018 systemd[1]: sshd@29-172.234.216.155:22-172.236.228.198:30490.service: Deactivated successfully. 
Aug 13 00:09:02.130319 systemd[1]: Started sshd@31-172.234.216.155:22-172.236.228.198:30504.service - OpenSSH per-connection server daemon (172.236.228.198:30504). Aug 13 00:09:02.257070 sshd[4318]: Accepted publickey for core from 139.178.89.65 port 45158 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:02.258695 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:02.263352 systemd-logind[1455]: New session 29 of user core. Aug 13 00:09:02.269251 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 00:09:02.550632 sshd[4325]: Connection closed by 139.178.89.65 port 45158 Aug 13 00:09:02.551232 sshd-session[4318]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:02.555197 systemd[1]: sshd@30-172.234.216.155:22-139.178.89.65:45158.service: Deactivated successfully. Aug 13 00:09:02.557104 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 00:09:02.558494 systemd-logind[1455]: Session 29 logged out. Waiting for processes to exit. Aug 13 00:09:02.559369 systemd-logind[1455]: Removed session 29. Aug 13 00:09:03.050087 sshd[4323]: Connection closed by 172.236.228.198 port 30504 [preauth] Aug 13 00:09:03.051999 systemd[1]: sshd@31-172.234.216.155:22-172.236.228.198:30504.service: Deactivated successfully. Aug 13 00:09:04.633517 kubelet[2606]: E0813 00:09:04.633483 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:09:07.617477 systemd[1]: Started sshd@32-172.234.216.155:22-139.178.89.65:45170.service - OpenSSH per-connection server daemon (139.178.89.65:45170). Aug 13 00:09:07.634922 kubelet[2606]: E0813 00:09:07.634213 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:09:07.947158 sshd[4341]: Accepted publickey for core from 139.178.89.65 port 45170 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:07.948512 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:07.953562 systemd-logind[1455]: New session 30 of user core. Aug 13 00:09:07.962247 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 00:09:08.244456 sshd[4344]: Connection closed by 139.178.89.65 port 45170 Aug 13 00:09:08.245697 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:08.249764 systemd[1]: sshd@32-172.234.216.155:22-139.178.89.65:45170.service: Deactivated successfully. Aug 13 00:09:08.251871 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 00:09:08.252987 systemd-logind[1455]: Session 30 logged out. Waiting for processes to exit. Aug 13 00:09:08.253814 systemd-logind[1455]: Removed session 30. 
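The three connections from 172.236.228.198 above behave differently from the 139.178.89.65 sessions: each closes during preauth, i.e. before any authentication completes, a signature typical of scanners or banner grabs. A quick triage sketch that tallies preauth disconnects per source address; the threshold and output format are arbitrary choices, not anything sshd or fail2ban defines:

    import re
    from collections import Counter

    PREAUTH = re.compile(r"Connection closed by (\S+) port \d+ \[preauth\]")

    def preauth_counts(lines, threshold=3):
        # Count closes that happened before authentication, per remote address.
        hits = Counter(m[1] for line in lines if (m := PREAUTH.search(line)))
        return {ip: n for ip, n in hits.items() if n >= threshold}

    log = [
        "Aug 13 00:09:00.956668 sshd[4310]: Connection closed by 172.236.228.198 port 30474 [preauth]",
        "Aug 13 00:09:02.066716 sshd[4315]: Connection closed by 172.236.228.198 port 30490 [preauth]",
        "Aug 13 00:09:03.050087 sshd[4323]: Connection closed by 172.236.228.198 port 30504 [preauth]",
    ]
    print(preauth_counts(log))  # {'172.236.228.198': 3} -> candidate for a block rule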
Aug 13 00:09:10.536229 kubelet[2606]: I0813 00:09:10.536167 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:10.536229 kubelet[2606]: I0813 00:09:10.536201 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:09:10.539244 kubelet[2606]: I0813 00:09:10.539212 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:09:10.555211 kubelet[2606]: I0813 00:09:10.555190 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:10.555310 kubelet[2606]: I0813 00:09:10.555293 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:09:10.555340 kubelet[2606]: E0813 00:09:10.555323 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:09:10.555340 kubelet[2606]: E0813 00:09:10.555336 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:09:10.555393 kubelet[2606]: E0813 00:09:10.555345 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:09:10.555393 kubelet[2606]: E0813 00:09:10.555356 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:09:10.555393 kubelet[2606]: E0813 00:09:10.555364 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:09:10.555393 kubelet[2606]: E0813 00:09:10.555372 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:09:10.555393 kubelet[2606]: E0813 00:09:10.555380 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:09:10.555393 kubelet[2606]: E0813 00:09:10.555387 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:09:10.555393 kubelet[2606]: I0813 00:09:10.555396 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:09:13.311328 systemd[1]: Started sshd@33-172.234.216.155:22-139.178.89.65:52634.service - OpenSSH per-connection server daemon (139.178.89.65:52634). Aug 13 00:09:13.652019 sshd[4355]: Accepted publickey for core from 139.178.89.65 port 52634 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:13.653598 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:13.658882 systemd-logind[1455]: New session 31 of user core. Aug 13 00:09:13.667242 systemd[1]: Started session-31.scope - Session 31 of User core. 
Aug 13 00:09:13.964801 sshd[4359]: Connection closed by 139.178.89.65 port 52634 Aug 13 00:09:13.965541 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:13.969639 systemd[1]: sshd@33-172.234.216.155:22-139.178.89.65:52634.service: Deactivated successfully. Aug 13 00:09:13.972064 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 00:09:13.973220 systemd-logind[1455]: Session 31 logged out. Waiting for processes to exit. Aug 13 00:09:13.974470 systemd-logind[1455]: Removed session 31. Aug 13 00:09:19.030346 systemd[1]: Started sshd@34-172.234.216.155:22-139.178.89.65:39830.service - OpenSSH per-connection server daemon (139.178.89.65:39830). Aug 13 00:09:19.359145 sshd[4371]: Accepted publickey for core from 139.178.89.65 port 39830 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:19.360589 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:19.365741 systemd-logind[1455]: New session 32 of user core. Aug 13 00:09:19.374236 systemd[1]: Started session-32.scope - Session 32 of User core. Aug 13 00:09:19.655041 sshd[4373]: Connection closed by 139.178.89.65 port 39830 Aug 13 00:09:19.655850 sshd-session[4371]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:19.660422 systemd[1]: sshd@34-172.234.216.155:22-139.178.89.65:39830.service: Deactivated successfully. Aug 13 00:09:19.662848 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 00:09:19.663819 systemd-logind[1455]: Session 32 logged out. Waiting for processes to exit. Aug 13 00:09:19.664956 systemd-logind[1455]: Removed session 32. Aug 13 00:09:20.574632 kubelet[2606]: I0813 00:09:20.574594 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:20.575005 kubelet[2606]: I0813 00:09:20.574666 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:09:20.577815 kubelet[2606]: I0813 00:09:20.577498 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:09:20.592136 kubelet[2606]: I0813 00:09:20.590232 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:20.592136 kubelet[2606]: I0813 00:09:20.590717 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:09:20.592136 kubelet[2606]: E0813 00:09:20.590825 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:09:20.592136 kubelet[2606]: E0813 00:09:20.590843 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:09:20.592136 kubelet[2606]: E0813 00:09:20.590852 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:09:20.592136 kubelet[2606]: E0813 00:09:20.590867 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:09:20.592136 kubelet[2606]: E0813 00:09:20.590981 2606 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:09:20.592136 kubelet[2606]: E0813 00:09:20.590993 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:09:20.592136 kubelet[2606]: E0813 00:09:20.591002 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:09:20.592136 kubelet[2606]: E0813 00:09:20.591011 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:09:20.592136 kubelet[2606]: I0813 00:09:20.591021 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:09:24.717359 systemd[1]: Started sshd@35-172.234.216.155:22-139.178.89.65:39846.service - OpenSSH per-connection server daemon (139.178.89.65:39846). Aug 13 00:09:25.036342 sshd[4384]: Accepted publickey for core from 139.178.89.65 port 39846 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:25.038027 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:25.043823 systemd-logind[1455]: New session 33 of user core. Aug 13 00:09:25.054239 systemd[1]: Started session-33.scope - Session 33 of User core. Aug 13 00:09:25.328081 sshd[4386]: Connection closed by 139.178.89.65 port 39846 Aug 13 00:09:25.329025 sshd-session[4384]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:25.332666 systemd[1]: sshd@35-172.234.216.155:22-139.178.89.65:39846.service: Deactivated successfully. Aug 13 00:09:25.334772 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 00:09:25.335678 systemd-logind[1455]: Session 33 logged out. Waiting for processes to exit. Aug 13 00:09:25.337096 systemd-logind[1455]: Removed session 33. Aug 13 00:09:30.400314 systemd[1]: Started sshd@36-172.234.216.155:22-139.178.89.65:33694.service - OpenSSH per-connection server daemon (139.178.89.65:33694). 
Aug 13 00:09:30.605509 kubelet[2606]: I0813 00:09:30.605487 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:30.605509 kubelet[2606]: I0813 00:09:30.605516 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:09:30.607397 kubelet[2606]: I0813 00:09:30.607380 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:09:30.617819 kubelet[2606]: I0813 00:09:30.617801 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:30.618261 kubelet[2606]: I0813 00:09:30.618243 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:09:30.618419 kubelet[2606]: E0813 00:09:30.618385 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:09:30.618419 kubelet[2606]: E0813 00:09:30.618401 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:09:30.618469 kubelet[2606]: E0813 00:09:30.618421 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:09:30.618548 kubelet[2606]: E0813 00:09:30.618537 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:09:30.618601 kubelet[2606]: E0813 00:09:30.618549 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:09:30.618601 kubelet[2606]: E0813 00:09:30.618558 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:09:30.618601 kubelet[2606]: E0813 00:09:30.618568 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:09:30.618694 kubelet[2606]: E0813 00:09:30.618678 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:09:30.618694 kubelet[2606]: I0813 00:09:30.618688 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:09:30.633268 kubelet[2606]: E0813 00:09:30.633246 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:09:30.736842 sshd[4401]: Accepted publickey for core from 139.178.89.65 port 33694 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:30.738252 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:30.743098 systemd-logind[1455]: New session 34 of user core. Aug 13 00:09:30.747270 systemd[1]: Started session-34.scope - Session 34 of User core. 
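Each transient unit name in these entries encodes an accept counter plus both socket endpoints, e.g. sshd@14-172.234.216.155:22-139.178.89.65:44326.service. A small helper, an illustration rather than any systemd API, that splits a logged name back into its parts:

    import re

    # Name shape taken from the units logged above: sshd@<n>-<local>-<peer>.service
    UNIT = re.compile(r"^sshd@(\d+)-(.+:\d+)-(.+:\d+)\.service$")

    def parse_unit(name: str):
        m = UNIT.match(name)
        if not m:
            raise ValueError(f"not a per-connection sshd unit: {name}")
        counter, local, peer = m.groups()
        return int(counter), local, peer

    print(parse_unit("sshd@14-172.234.216.155:22-139.178.89.65:44326.service"))
    # -> (14, '172.234.216.155:22', '139.178.89.65:44326')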
Aug 13 00:09:31.039785 sshd[4403]: Connection closed by 139.178.89.65 port 33694 Aug 13 00:09:31.040704 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:31.045384 systemd[1]: sshd@36-172.234.216.155:22-139.178.89.65:33694.service: Deactivated successfully. Aug 13 00:09:31.048404 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 00:09:31.049944 systemd-logind[1455]: Session 34 logged out. Waiting for processes to exit. Aug 13 00:09:31.051316 systemd-logind[1455]: Removed session 34. Aug 13 00:09:36.101323 systemd[1]: Started sshd@37-172.234.216.155:22-139.178.89.65:33704.service - OpenSSH per-connection server daemon (139.178.89.65:33704). Aug 13 00:09:36.424358 sshd[4416]: Accepted publickey for core from 139.178.89.65 port 33704 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:36.425740 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:36.429801 systemd-logind[1455]: New session 35 of user core. Aug 13 00:09:36.432289 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 00:09:36.707409 sshd[4418]: Connection closed by 139.178.89.65 port 33704 Aug 13 00:09:36.708161 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:36.712233 systemd[1]: sshd@37-172.234.216.155:22-139.178.89.65:33704.service: Deactivated successfully. Aug 13 00:09:36.714069 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 00:09:36.714954 systemd-logind[1455]: Session 35 logged out. Waiting for processes to exit. Aug 13 00:09:36.715838 systemd-logind[1455]: Removed session 35. Aug 13 00:09:37.634264 kubelet[2606]: E0813 00:09:37.633458 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:09:39.634877 kubelet[2606]: E0813 00:09:39.634406 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:09:40.634310 kubelet[2606]: I0813 00:09:40.634277 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:40.634310 kubelet[2606]: I0813 00:09:40.634314 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:09:40.636214 kubelet[2606]: I0813 00:09:40.636190 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:09:40.645846 kubelet[2606]: I0813 00:09:40.645825 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:40.645959 kubelet[2606]: I0813 00:09:40.645936 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:09:40.645993 kubelet[2606]: E0813 00:09:40.645970 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:09:40.645993 kubelet[2606]: E0813 00:09:40.645981 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:09:40.645993 kubelet[2606]: E0813 00:09:40.645989 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:09:40.646054 kubelet[2606]: E0813 00:09:40.645998 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:09:40.646054 kubelet[2606]: E0813 00:09:40.646006 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:09:40.646054 kubelet[2606]: E0813 00:09:40.646015 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:09:40.646054 kubelet[2606]: E0813 00:09:40.646023 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:09:40.646054 kubelet[2606]: E0813 00:09:40.646030 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:09:40.646054 kubelet[2606]: I0813 00:09:40.646039 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:09:41.769383 systemd[1]: Started sshd@38-172.234.216.155:22-139.178.89.65:47610.service - OpenSSH per-connection server daemon (139.178.89.65:47610). Aug 13 00:09:42.094349 sshd[4431]: Accepted publickey for core from 139.178.89.65 port 47610 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:42.096211 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:42.101371 systemd-logind[1455]: New session 36 of user core. Aug 13 00:09:42.105252 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 00:09:42.389884 sshd[4433]: Connection closed by 139.178.89.65 port 47610 Aug 13 00:09:42.390867 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:42.394465 systemd-logind[1455]: Session 36 logged out. Waiting for processes to exit. Aug 13 00:09:42.395292 systemd[1]: sshd@38-172.234.216.155:22-139.178.89.65:47610.service: Deactivated successfully. Aug 13 00:09:42.397473 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 00:09:42.398450 systemd-logind[1455]: Removed session 36. Aug 13 00:09:42.634062 kubelet[2606]: E0813 00:09:42.633726 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:09:46.633422 kubelet[2606]: E0813 00:09:46.633389 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:09:47.460403 systemd[1]: Started sshd@39-172.234.216.155:22-139.178.89.65:47624.service - OpenSSH per-connection server daemon (139.178.89.65:47624). Aug 13 00:09:47.812639 sshd[4445]: Accepted publickey for core from 139.178.89.65 port 47624 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:47.814318 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:47.819336 systemd-logind[1455]: New session 37 of user core. Aug 13 00:09:47.822271 systemd[1]: Started session-37.scope - Session 37 of User core. 
Aug 13 00:09:48.121777 sshd[4447]: Connection closed by 139.178.89.65 port 47624 Aug 13 00:09:48.122876 sshd-session[4445]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:48.126755 systemd-logind[1455]: Session 37 logged out. Waiting for processes to exit. Aug 13 00:09:48.127881 systemd[1]: sshd@39-172.234.216.155:22-139.178.89.65:47624.service: Deactivated successfully. Aug 13 00:09:48.130109 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 00:09:48.130973 systemd-logind[1455]: Removed session 37. Aug 13 00:09:50.664368 kubelet[2606]: I0813 00:09:50.664327 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:50.664368 kubelet[2606]: I0813 00:09:50.664365 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:09:50.665757 kubelet[2606]: I0813 00:09:50.665741 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:09:50.676563 kubelet[2606]: I0813 00:09:50.676530 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:09:50.676676 kubelet[2606]: I0813 00:09:50.676659 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:09:50.676727 kubelet[2606]: E0813 00:09:50.676689 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:09:50.676727 kubelet[2606]: E0813 00:09:50.676700 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:09:50.676727 kubelet[2606]: E0813 00:09:50.676708 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:09:50.676727 kubelet[2606]: E0813 00:09:50.676717 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:09:50.676727 kubelet[2606]: E0813 00:09:50.676727 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:09:50.676830 kubelet[2606]: E0813 00:09:50.676741 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:09:50.676830 kubelet[2606]: E0813 00:09:50.676757 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:09:50.676830 kubelet[2606]: E0813 00:09:50.676771 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:09:50.676830 kubelet[2606]: I0813 00:09:50.676787 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:09:53.190481 systemd[1]: Started sshd@40-172.234.216.155:22-139.178.89.65:47490.service - OpenSSH per-connection server daemon (139.178.89.65:47490). 
Aug 13 00:09:53.524010 sshd[4459]: Accepted publickey for core from 139.178.89.65 port 47490 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:53.525331 sshd-session[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:53.531931 systemd-logind[1455]: New session 38 of user core. Aug 13 00:09:53.537252 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 00:09:53.828916 sshd[4461]: Connection closed by 139.178.89.65 port 47490 Aug 13 00:09:53.829477 sshd-session[4459]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:53.833470 systemd-logind[1455]: Session 38 logged out. Waiting for processes to exit. Aug 13 00:09:53.834583 systemd[1]: sshd@40-172.234.216.155:22-139.178.89.65:47490.service: Deactivated successfully. Aug 13 00:09:53.837266 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 00:09:53.838461 systemd-logind[1455]: Removed session 38. Aug 13 00:09:58.633666 kubelet[2606]: E0813 00:09:58.633628 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 00:09:58.897463 systemd[1]: Started sshd@41-172.234.216.155:22-139.178.89.65:47496.service - OpenSSH per-connection server daemon (139.178.89.65:47496). Aug 13 00:09:59.223706 sshd[4474]: Accepted publickey for core from 139.178.89.65 port 47496 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:09:59.225587 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:09:59.230711 systemd-logind[1455]: New session 39 of user core. Aug 13 00:09:59.236378 systemd[1]: Started session-39.scope - Session 39 of User core. Aug 13 00:09:59.522372 sshd[4476]: Connection closed by 139.178.89.65 port 47496 Aug 13 00:09:59.523180 sshd-session[4474]: pam_unix(sshd:session): session closed for user core Aug 13 00:09:59.529092 systemd-logind[1455]: Session 39 logged out. Waiting for processes to exit. Aug 13 00:09:59.530002 systemd[1]: sshd@41-172.234.216.155:22-139.178.89.65:47496.service: Deactivated successfully. Aug 13 00:09:59.532611 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 00:09:59.533721 systemd-logind[1455]: Removed session 39. 
Aug 13 00:10:00.692357 kubelet[2606]: I0813 00:10:00.692325 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:10:00.692357 kubelet[2606]: I0813 00:10:00.692362 2606 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:10:00.693485 kubelet[2606]: I0813 00:10:00.693469 2606 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:10:00.702061 kubelet[2606]: I0813 00:10:00.702037 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:10:00.702190 kubelet[2606]: I0813 00:10:00.702164 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"] Aug 13 00:10:00.702236 kubelet[2606]: E0813 00:10:00.702195 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw" Aug 13 00:10:00.702236 kubelet[2606]: E0813 00:10:00.702206 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp" Aug 13 00:10:00.702236 kubelet[2606]: E0813 00:10:00.702214 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx" Aug 13 00:10:00.702236 kubelet[2606]: E0813 00:10:00.702223 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6" Aug 13 00:10:00.702236 kubelet[2606]: E0813 00:10:00.702230 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155" Aug 13 00:10:00.702236 kubelet[2606]: E0813 00:10:00.702238 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s" Aug 13 00:10:00.702396 kubelet[2606]: E0813 00:10:00.702245 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155" Aug 13 00:10:00.702396 kubelet[2606]: E0813 00:10:00.702254 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155" Aug 13 00:10:00.702396 kubelet[2606]: I0813 00:10:00.702263 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:10:04.590523 systemd[1]: Started sshd@42-172.234.216.155:22-139.178.89.65:40696.service - OpenSSH per-connection server daemon (139.178.89.65:40696). Aug 13 00:10:04.916695 sshd[4488]: Accepted publickey for core from 139.178.89.65 port 40696 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:10:04.918344 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:10:04.923072 systemd-logind[1455]: New session 40 of user core. Aug 13 00:10:04.930250 systemd[1]: Started session-40.scope - Session 40 of User core. 
Aug 13 00:10:05.218598 sshd[4490]: Connection closed by 139.178.89.65 port 40696
Aug 13 00:10:05.219399 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:05.224585 systemd[1]: sshd@42-172.234.216.155:22-139.178.89.65:40696.service: Deactivated successfully.
Aug 13 00:10:05.227046 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 00:10:05.228044 systemd-logind[1455]: Session 40 logged out. Waiting for processes to exit.
Aug 13 00:10:05.229156 systemd-logind[1455]: Removed session 40.
Aug 13 00:10:10.291628 systemd[1]: Started sshd@43-172.234.216.155:22-139.178.89.65:40768.service - OpenSSH per-connection server daemon (139.178.89.65:40768).
Aug 13 00:10:10.622569 sshd[4505]: Accepted publickey for core from 139.178.89.65 port 40768 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:10.623825 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:10.628832 systemd-logind[1455]: New session 41 of user core.
Aug 13 00:10:10.634257 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 00:10:10.721321 kubelet[2606]: I0813 00:10:10.721289 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:10.721926 kubelet[2606]: I0813 00:10:10.721329 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:10:10.723090 kubelet[2606]: I0813 00:10:10.723069 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:10:10.735529 kubelet[2606]: I0813 00:10:10.735499 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:10.735648 kubelet[2606]: I0813 00:10:10.735630 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-dssvx","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:10:10.735689 kubelet[2606]: E0813 00:10:10.735660 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:10:10.735689 kubelet[2606]: E0813 00:10:10.735671 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:10:10.735689 kubelet[2606]: E0813 00:10:10.735679 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:10:10.735689 kubelet[2606]: E0813 00:10:10.735687 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:10:10.735782 kubelet[2606]: E0813 00:10:10.735695 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:10:10.735782 kubelet[2606]: E0813 00:10:10.735704 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:10:10.735782 kubelet[2606]: E0813 00:10:10.735712 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:10:10.735782 kubelet[2606]: E0813 00:10:10.735719 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:10:10.735782 kubelet[2606]: I0813 00:10:10.735729 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:10:10.926941 sshd[4507]: Connection closed by 139.178.89.65 port 40768
Aug 13 00:10:10.928866 sshd-session[4505]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:10.935828 systemd[1]: sshd@43-172.234.216.155:22-139.178.89.65:40768.service: Deactivated successfully.
Aug 13 00:10:10.938032 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 00:10:10.938980 systemd-logind[1455]: Session 41 logged out. Waiting for processes to exit.
Aug 13 00:10:10.940108 systemd-logind[1455]: Removed session 41.
Aug 13 00:10:14.633330 kubelet[2606]: E0813 00:10:14.633291 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:10:15.991348 systemd[1]: Started sshd@44-172.234.216.155:22-139.178.89.65:40782.service - OpenSSH per-connection server daemon (139.178.89.65:40782).
Aug 13 00:10:16.329971 sshd[4519]: Accepted publickey for core from 139.178.89.65 port 40782 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:16.331705 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:16.337502 systemd-logind[1455]: New session 42 of user core.
Aug 13 00:10:16.347249 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 00:10:16.631582 sshd[4521]: Connection closed by 139.178.89.65 port 40782
Aug 13 00:10:16.632380 sshd-session[4519]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:16.636838 systemd[1]: sshd@44-172.234.216.155:22-139.178.89.65:40782.service: Deactivated successfully.
Aug 13 00:10:16.639474 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 00:10:16.640392 systemd-logind[1455]: Session 42 logged out. Waiting for processes to exit.
Aug 13 00:10:16.641554 systemd-logind[1455]: Removed session 42.
Aug 13 00:10:19.635761 kubelet[2606]: E0813 00:10:19.635680 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:10:20.755006 kubelet[2606]: I0813 00:10:20.754972 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:20.755006 kubelet[2606]: I0813 00:10:20.755010 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:10:20.756739 kubelet[2606]: I0813 00:10:20.756722 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:10:20.772864 kubelet[2606]: I0813 00:10:20.772834 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:20.772995 kubelet[2606]: I0813 00:10:20.772972 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-dssvx","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:10:20.773048 kubelet[2606]: E0813 00:10:20.773003 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:10:20.773048 kubelet[2606]: E0813 00:10:20.773016 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:10:20.773048 kubelet[2606]: E0813 00:10:20.773025 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:10:20.773048 kubelet[2606]: E0813 00:10:20.773034 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:10:20.773048 kubelet[2606]: E0813 00:10:20.773042 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:10:20.773048 kubelet[2606]: E0813 00:10:20.773051 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:10:20.773198 kubelet[2606]: E0813 00:10:20.773059 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:10:20.773198 kubelet[2606]: E0813 00:10:20.773067 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:10:20.773198 kubelet[2606]: I0813 00:10:20.773077 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:10:21.698359 systemd[1]: Started sshd@45-172.234.216.155:22-139.178.89.65:49734.service - OpenSSH per-connection server daemon (139.178.89.65:49734).
Aug 13 00:10:22.036058 sshd[4532]: Accepted publickey for core from 139.178.89.65 port 49734 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:22.038085 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:22.042312 systemd-logind[1455]: New session 43 of user core.
Aug 13 00:10:22.048255 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 00:10:22.334377 sshd[4534]: Connection closed by 139.178.89.65 port 49734
Aug 13 00:10:22.335165 sshd-session[4532]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:22.338237 systemd[1]: sshd@45-172.234.216.155:22-139.178.89.65:49734.service: Deactivated successfully.
Aug 13 00:10:22.340422 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 00:10:22.341667 systemd-logind[1455]: Session 43 logged out. Waiting for processes to exit.
Aug 13 00:10:22.342948 systemd-logind[1455]: Removed session 43.
Aug 13 00:10:27.396339 systemd[1]: Started sshd@46-172.234.216.155:22-139.178.89.65:49740.service - OpenSSH per-connection server daemon (139.178.89.65:49740).
Aug 13 00:10:27.718777 sshd[4546]: Accepted publickey for core from 139.178.89.65 port 49740 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:27.720332 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:27.726108 systemd-logind[1455]: New session 44 of user core.
Aug 13 00:10:27.733432 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 00:10:28.011307 sshd[4548]: Connection closed by 139.178.89.65 port 49740
Aug 13 00:10:28.012308 sshd-session[4546]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:28.015465 systemd[1]: sshd@46-172.234.216.155:22-139.178.89.65:49740.service: Deactivated successfully.
Aug 13 00:10:28.017714 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 00:10:28.018977 systemd-logind[1455]: Session 44 logged out. Waiting for processes to exit.
Aug 13 00:10:28.020044 systemd-logind[1455]: Removed session 44.
Aug 13 00:10:29.631183 kubelet[2606]: I0813 00:10:29.631116 2606 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=88 highThreshold=85 amountToFree=155928985 lowThreshold=80
Aug 13 00:10:29.631183 kubelet[2606]: E0813 00:10:29.631174 2606 kubelet.go:1551] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 155928985 bytes, but only found 0 bytes eligible to free."
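[editor's note] The two image_gc_manager lines above also pin down the filesystem size. Freeing from 88% usage down to the 80% low threshold means reclaiming roughly 8% of capacity, and 155928985 bytes / 0.08 is about 1.95 GB, so the image filesystem on this node is only around 2 GB, which is why GC keeps failing with 0 eligible bytes. A hedged Go rendering of that arithmetic (upstream computes amountToFree as usage minus lowThreshold% of capacity from exact byte counts; the names and the back-solved capacity here are illustrative):

    package main

    import "fmt"

    func main() {
    	// Figures taken from the log line: usage=88 lowThreshold=80
    	// amountToFree=155928985. Capacity is back-solved from the rounded
    	// usage percentage, so it is only an estimate.
    	const (
    		amountToFree = 155928985 // bytes the kubelet wanted to free
    		usagePct     = 88.0
    		lowPct       = 80.0
    	)
    	capacity := float64(amountToFree) / ((usagePct - lowPct) / 100.0)
    	fmt.Printf("estimated image filesystem capacity: %.2f GB\n", capacity/1e9)
    	// amountToFree = usageBytes - lowPct/100*capacity, i.e. the bytes
    	// sitting above the low-threshold watermark.
    }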
Aug 13 00:10:30.792254 kubelet[2606]: I0813 00:10:30.792178 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:30.792254 kubelet[2606]: I0813 00:10:30.792211 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:10:30.793656 kubelet[2606]: I0813 00:10:30.793636 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:10:30.803208 kubelet[2606]: I0813 00:10:30.803188 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:30.803307 kubelet[2606]: I0813 00:10:30.803291 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:10:30.803514 kubelet[2606]: E0813 00:10:30.803319 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:10:30.803514 kubelet[2606]: E0813 00:10:30.803330 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:10:30.803514 kubelet[2606]: E0813 00:10:30.803338 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:10:30.803514 kubelet[2606]: E0813 00:10:30.803348 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:10:30.803514 kubelet[2606]: E0813 00:10:30.803511 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:10:30.803608 kubelet[2606]: E0813 00:10:30.803518 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:10:30.803608 kubelet[2606]: E0813 00:10:30.803526 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:10:30.803608 kubelet[2606]: E0813 00:10:30.803534 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:10:30.803608 kubelet[2606]: I0813 00:10:30.803542 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:10:33.080340 systemd[1]: Started sshd@47-172.234.216.155:22-139.178.89.65:58420.service - OpenSSH per-connection server daemon (139.178.89.65:58420).
Aug 13 00:10:33.416094 sshd[4562]: Accepted publickey for core from 139.178.89.65 port 58420 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:33.417829 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:33.422658 systemd-logind[1455]: New session 45 of user core.
Aug 13 00:10:33.426353 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 00:10:33.720977 sshd[4564]: Connection closed by 139.178.89.65 port 58420
Aug 13 00:10:33.721872 sshd-session[4562]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:33.725734 systemd-logind[1455]: Session 45 logged out. Waiting for processes to exit.
Aug 13 00:10:33.726596 systemd[1]: sshd@47-172.234.216.155:22-139.178.89.65:58420.service: Deactivated successfully.
Aug 13 00:10:33.728859 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 00:10:33.729813 systemd-logind[1455]: Removed session 45.
Aug 13 00:10:38.782312 systemd[1]: Started sshd@48-172.234.216.155:22-139.178.89.65:58432.service - OpenSSH per-connection server daemon (139.178.89.65:58432).
Aug 13 00:10:39.099723 sshd[4578]: Accepted publickey for core from 139.178.89.65 port 58432 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:39.101259 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:39.106564 systemd-logind[1455]: New session 46 of user core.
Aug 13 00:10:39.112234 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 00:10:39.399272 sshd[4580]: Connection closed by 139.178.89.65 port 58432
Aug 13 00:10:39.400327 sshd-session[4578]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:39.404422 systemd[1]: sshd@48-172.234.216.155:22-139.178.89.65:58432.service: Deactivated successfully.
Aug 13 00:10:39.407452 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 00:10:39.408550 systemd-logind[1455]: Session 46 logged out. Waiting for processes to exit.
Aug 13 00:10:39.409883 systemd-logind[1455]: Removed session 46.
Aug 13 00:10:40.821277 kubelet[2606]: I0813 00:10:40.821233 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:40.821277 kubelet[2606]: I0813 00:10:40.821273 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:10:40.823069 kubelet[2606]: I0813 00:10:40.823043 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:10:40.834738 kubelet[2606]: I0813 00:10:40.834706 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:40.834838 kubelet[2606]: I0813 00:10:40.834804 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:10:40.834838 kubelet[2606]: E0813 00:10:40.834832 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:10:40.834898 kubelet[2606]: E0813 00:10:40.834846 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:10:40.834898 kubelet[2606]: E0813 00:10:40.834855 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:10:40.834898 kubelet[2606]: E0813 00:10:40.834864 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:10:40.834898 kubelet[2606]: E0813 00:10:40.834872 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:10:40.834898 kubelet[2606]: E0813 00:10:40.834880 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:10:40.834898 kubelet[2606]: E0813 00:10:40.834889 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:10:40.834898 kubelet[2606]: E0813 00:10:40.834896 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:10:40.835058 kubelet[2606]: I0813 00:10:40.834907 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:10:44.461617 systemd[1]: Started sshd@49-172.234.216.155:22-139.178.89.65:39852.service - OpenSSH per-connection server daemon (139.178.89.65:39852).
Aug 13 00:10:44.790193 sshd[4592]: Accepted publickey for core from 139.178.89.65 port 39852 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:44.791680 sshd-session[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:44.796211 systemd-logind[1455]: New session 47 of user core.
Aug 13 00:10:44.802271 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 00:10:45.093694 sshd[4594]: Connection closed by 139.178.89.65 port 39852
Aug 13 00:10:45.094458 sshd-session[4592]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:45.098993 systemd-logind[1455]: Session 47 logged out. Waiting for processes to exit.
Aug 13 00:10:45.099955 systemd[1]: sshd@49-172.234.216.155:22-139.178.89.65:39852.service: Deactivated successfully.
Aug 13 00:10:45.102060 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 00:10:45.103441 systemd-logind[1455]: Removed session 47.
Aug 13 00:10:50.159502 systemd[1]: Started sshd@50-172.234.216.155:22-139.178.89.65:38940.service - OpenSSH per-connection server daemon (139.178.89.65:38940).
Aug 13 00:10:50.477926 sshd[4606]: Accepted publickey for core from 139.178.89.65 port 38940 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:50.479288 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:50.483470 systemd-logind[1455]: New session 48 of user core.
Aug 13 00:10:50.488254 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 00:10:50.771896 sshd[4608]: Connection closed by 139.178.89.65 port 38940
Aug 13 00:10:50.772993 sshd-session[4606]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:50.777467 systemd[1]: sshd@50-172.234.216.155:22-139.178.89.65:38940.service: Deactivated successfully.
Aug 13 00:10:50.780240 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 00:10:50.781215 systemd-logind[1455]: Session 48 logged out. Waiting for processes to exit.
Aug 13 00:10:50.782413 systemd-logind[1455]: Removed session 48.
Aug 13 00:10:50.851713 kubelet[2606]: I0813 00:10:50.851663 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:50.851713 kubelet[2606]: I0813 00:10:50.851711 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:10:50.853032 kubelet[2606]: I0813 00:10:50.853014 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:10:50.865480 kubelet[2606]: I0813 00:10:50.865456 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:10:50.865589 kubelet[2606]: I0813 00:10:50.865569 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-dssvx","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:10:50.865660 kubelet[2606]: E0813 00:10:50.865597 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:10:50.865660 kubelet[2606]: E0813 00:10:50.865609 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:10:50.865660 kubelet[2606]: E0813 00:10:50.865617 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:10:50.865660 kubelet[2606]: E0813 00:10:50.865626 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:10:50.865660 kubelet[2606]: E0813 00:10:50.865635 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:10:50.865660 kubelet[2606]: E0813 00:10:50.865643 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:10:50.865660 kubelet[2606]: E0813 00:10:50.865650 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:10:50.865660 kubelet[2606]: E0813 00:10:50.865658 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:10:50.865660 kubelet[2606]: I0813 00:10:50.865667 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:10:54.633777 kubelet[2606]: E0813 00:10:54.633690 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:10:55.837354 systemd[1]: Started sshd@51-172.234.216.155:22-139.178.89.65:38946.service - OpenSSH per-connection server daemon (139.178.89.65:38946).
Aug 13 00:10:56.160447 sshd[4620]: Accepted publickey for core from 139.178.89.65 port 38946 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:10:56.161943 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:10:56.167611 systemd-logind[1455]: New session 49 of user core.
Aug 13 00:10:56.174323 systemd[1]: Started session-49.scope - Session 49 of User core.
Aug 13 00:10:56.454191 sshd[4622]: Connection closed by 139.178.89.65 port 38946
Aug 13 00:10:56.454950 sshd-session[4620]: pam_unix(sshd:session): session closed for user core
Aug 13 00:10:56.457812 systemd[1]: sshd@51-172.234.216.155:22-139.178.89.65:38946.service: Deactivated successfully.
Aug 13 00:10:56.460081 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 00:10:56.461784 systemd-logind[1455]: Session 49 logged out. Waiting for processes to exit.
Aug 13 00:10:56.463718 systemd-logind[1455]: Removed session 49.
Aug 13 00:11:00.880268 kubelet[2606]: I0813 00:11:00.880175 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:11:00.880268 kubelet[2606]: I0813 00:11:00.880210 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:11:00.881995 kubelet[2606]: I0813 00:11:00.881692 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:11:00.890496 kubelet[2606]: I0813 00:11:00.890481 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:11:00.890578 kubelet[2606]: I0813 00:11:00.890560 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:11:00.890616 kubelet[2606]: E0813 00:11:00.890592 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:11:00.890616 kubelet[2606]: E0813 00:11:00.890603 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:11:00.890616 kubelet[2606]: E0813 00:11:00.890611 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:11:00.890693 kubelet[2606]: E0813 00:11:00.890620 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:11:00.890693 kubelet[2606]: E0813 00:11:00.890629 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:11:00.890693 kubelet[2606]: E0813 00:11:00.890637 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:11:00.890693 kubelet[2606]: E0813 00:11:00.890644 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:11:00.890693 kubelet[2606]: E0813 00:11:00.890652 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:11:00.890693 kubelet[2606]: I0813 00:11:00.890661 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:11:01.524346 systemd[1]: Started sshd@52-172.234.216.155:22-139.178.89.65:54780.service - OpenSSH per-connection server daemon (139.178.89.65:54780).
Aug 13 00:11:01.855916 sshd[4634]: Accepted publickey for core from 139.178.89.65 port 54780 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:11:01.857779 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:11:01.862643 systemd-logind[1455]: New session 50 of user core.
Aug 13 00:11:01.866248 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 00:11:02.160385 sshd[4636]: Connection closed by 139.178.89.65 port 54780
Aug 13 00:11:02.161215 sshd-session[4634]: pam_unix(sshd:session): session closed for user core
Aug 13 00:11:02.165081 systemd-logind[1455]: Session 50 logged out. Waiting for processes to exit.
Aug 13 00:11:02.165859 systemd[1]: sshd@52-172.234.216.155:22-139.178.89.65:54780.service: Deactivated successfully.
Aug 13 00:11:02.168143 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 00:11:02.169457 systemd-logind[1455]: Removed session 50.
Aug 13 00:11:02.633890 kubelet[2606]: E0813 00:11:02.633866 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:04.634502 kubelet[2606]: E0813 00:11:04.634456 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:04.634891 kubelet[2606]: E0813 00:11:04.634471 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:07.222339 systemd[1]: Started sshd@53-172.234.216.155:22-139.178.89.65:54796.service - OpenSSH per-connection server daemon (139.178.89.65:54796).
Aug 13 00:11:07.540164 sshd[4650]: Accepted publickey for core from 139.178.89.65 port 54796 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:11:07.541935 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:11:07.546557 systemd-logind[1455]: New session 51 of user core.
Aug 13 00:11:07.551334 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 00:11:07.839819 sshd[4652]: Connection closed by 139.178.89.65 port 54796
Aug 13 00:11:07.840607 sshd-session[4650]: pam_unix(sshd:session): session closed for user core
Aug 13 00:11:07.844440 systemd[1]: sshd@53-172.234.216.155:22-139.178.89.65:54796.service: Deactivated successfully.
Aug 13 00:11:07.847003 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 00:11:07.849710 systemd-logind[1455]: Session 51 logged out. Waiting for processes to exit.
Aug 13 00:11:07.851345 systemd-logind[1455]: Removed session 51.
Aug 13 00:11:10.908050 kubelet[2606]: I0813 00:11:10.908014 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:11:10.908050 kubelet[2606]: I0813 00:11:10.908055 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:11:10.909548 kubelet[2606]: I0813 00:11:10.909521 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:11:10.920375 kubelet[2606]: I0813 00:11:10.920355 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:11:10.920496 kubelet[2606]: I0813 00:11:10.920476 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:11:10.920537 kubelet[2606]: E0813 00:11:10.920510 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:11:10.920537 kubelet[2606]: E0813 00:11:10.920522 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:11:10.920537 kubelet[2606]: E0813 00:11:10.920530 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:11:10.920537 kubelet[2606]: E0813 00:11:10.920539 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:11:10.920638 kubelet[2606]: E0813 00:11:10.920548 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:11:10.920638 kubelet[2606]: E0813 00:11:10.920556 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:11:10.920638 kubelet[2606]: E0813 00:11:10.920564 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:11:10.920638 kubelet[2606]: E0813 00:11:10.920577 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:11:10.920638 kubelet[2606]: I0813 00:11:10.920587 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:11:12.907727 systemd[1]: Started sshd@54-172.234.216.155:22-139.178.89.65:46208.service - OpenSSH per-connection server daemon (139.178.89.65:46208).
Aug 13 00:11:13.228627 sshd[4665]: Accepted publickey for core from 139.178.89.65 port 46208 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:11:13.230190 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:11:13.234380 systemd-logind[1455]: New session 52 of user core.
Aug 13 00:11:13.238291 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 00:11:13.520861 sshd[4667]: Connection closed by 139.178.89.65 port 46208
Aug 13 00:11:13.521693 sshd-session[4665]: pam_unix(sshd:session): session closed for user core
Aug 13 00:11:13.525517 systemd[1]: sshd@54-172.234.216.155:22-139.178.89.65:46208.service: Deactivated successfully.
Aug 13 00:11:13.528793 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 00:11:13.529757 systemd-logind[1455]: Session 52 logged out. Waiting for processes to exit.
Aug 13 00:11:13.530899 systemd-logind[1455]: Removed session 52.
Aug 13 00:11:16.633882 kubelet[2606]: E0813 00:11:16.633846 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:18.585447 systemd[1]: Started sshd@55-172.234.216.155:22-139.178.89.65:46220.service - OpenSSH per-connection server daemon (139.178.89.65:46220).
Aug 13 00:11:18.922024 sshd[4680]: Accepted publickey for core from 139.178.89.65 port 46220 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:11:18.923792 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:11:18.928875 systemd-logind[1455]: New session 53 of user core.
Aug 13 00:11:18.938284 systemd[1]: Started session-53.scope - Session 53 of User core.
Aug 13 00:11:19.232559 sshd[4682]: Connection closed by 139.178.89.65 port 46220
Aug 13 00:11:19.233415 sshd-session[4680]: pam_unix(sshd:session): session closed for user core
Aug 13 00:11:19.237221 systemd-logind[1455]: Session 53 logged out. Waiting for processes to exit.
Aug 13 00:11:19.238009 systemd[1]: sshd@55-172.234.216.155:22-139.178.89.65:46220.service: Deactivated successfully.
Aug 13 00:11:19.240343 systemd[1]: session-53.scope: Deactivated successfully.
Aug 13 00:11:19.241238 systemd-logind[1455]: Removed session 53.
Aug 13 00:11:20.936761 kubelet[2606]: I0813 00:11:20.936726 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:11:20.936761 kubelet[2606]: I0813 00:11:20.936765 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:11:20.938040 kubelet[2606]: I0813 00:11:20.938021 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:11:20.947367 kubelet[2606]: I0813 00:11:20.947350 2606 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:11:20.947492 kubelet[2606]: I0813 00:11:20.947475 2606 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-6n7sw","kube-system/coredns-668d6bf9bc-ztmgp","kube-system/coredns-668d6bf9bc-dssvx","kube-system/cilium-gwhs6","kube-system/kube-controller-manager-172-234-216-155","kube-system/kube-proxy-cph2s","kube-system/kube-apiserver-172-234-216-155","kube-system/kube-scheduler-172-234-216-155"]
Aug 13 00:11:20.947548 kubelet[2606]: E0813 00:11:20.947505 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-6n7sw"
Aug 13 00:11:20.947548 kubelet[2606]: E0813 00:11:20.947515 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ztmgp"
Aug 13 00:11:20.947548 kubelet[2606]: E0813 00:11:20.947523 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dssvx"
Aug 13 00:11:20.947548 kubelet[2606]: E0813 00:11:20.947532 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-gwhs6"
Aug 13 00:11:20.947548 kubelet[2606]: E0813 00:11:20.947541 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-216-155"
Aug 13 00:11:20.947548 kubelet[2606]: E0813 00:11:20.947549 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-cph2s"
Aug 13 00:11:20.947728 kubelet[2606]: E0813 00:11:20.947558 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-216-155"
Aug 13 00:11:20.947728 kubelet[2606]: E0813 00:11:20.947565 2606 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-216-155"
Aug 13 00:11:20.947728 kubelet[2606]: I0813 00:11:20.947585 2606 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:11:22.633592 kubelet[2606]: E0813 00:11:22.633561 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:23.634295 kubelet[2606]: E0813 00:11:23.633667 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:24.299335 systemd[1]: Started sshd@56-172.234.216.155:22-139.178.89.65:58424.service - OpenSSH per-connection server daemon (139.178.89.65:58424).
Aug 13 00:11:24.633671 sshd[4694]: Accepted publickey for core from 139.178.89.65 port 58424 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:11:24.635044 sshd-session[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:11:24.640161 systemd-logind[1455]: New session 54 of user core.
Aug 13 00:11:24.649251 systemd[1]: Started session-54.scope - Session 54 of User core.
Aug 13 00:11:24.931726 sshd[4696]: Connection closed by 139.178.89.65 port 58424
Aug 13 00:11:24.932401 sshd-session[4694]: pam_unix(sshd:session): session closed for user core
Aug 13 00:11:24.937496 systemd[1]: sshd@56-172.234.216.155:22-139.178.89.65:58424.service: Deactivated successfully.
Aug 13 00:11:24.939891 systemd[1]: session-54.scope: Deactivated successfully.
Aug 13 00:11:24.940934 systemd-logind[1455]: Session 54 logged out. Waiting for processes to exit.
Aug 13 00:11:24.941787 systemd-logind[1455]: Removed session 54.
Aug 13 00:11:24.997408 systemd[1]: Started sshd@57-172.234.216.155:22-139.178.89.65:58438.service - OpenSSH per-connection server daemon (139.178.89.65:58438).
Aug 13 00:11:25.328699 sshd[4709]: Accepted publickey for core from 139.178.89.65 port 58438 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:11:25.330080 sshd-session[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:11:25.334397 systemd-logind[1455]: New session 55 of user core.
Aug 13 00:11:25.341228 systemd[1]: Started session-55.scope - Session 55 of User core.
Aug 13 00:11:26.814500 containerd[1483]: time="2025-08-13T00:11:26.814417950Z" level=info msg="StopContainer for \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\" with timeout 30 (s)"
Aug 13 00:11:26.815583 containerd[1483]: time="2025-08-13T00:11:26.815394091Z" level=info msg="Stop container \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\" with signal terminated"
Aug 13 00:11:26.828769 systemd[1]: run-containerd-runc-k8s.io-9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877-runc.zQo71i.mount: Deactivated successfully.
Aug 13 00:11:26.834458 systemd[1]: cri-containerd-2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205.scope: Deactivated successfully.
Aug 13 00:11:26.841803 containerd[1483]: time="2025-08-13T00:11:26.841770636Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:11:26.859676 containerd[1483]: time="2025-08-13T00:11:26.859651066Z" level=info msg="StopContainer for \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\" with timeout 2 (s)"
Aug 13 00:11:26.860421 containerd[1483]: time="2025-08-13T00:11:26.860381937Z" level=info msg="Stop container \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\" with signal terminated"
Aug 13 00:11:26.873828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205-rootfs.mount: Deactivated successfully.
Aug 13 00:11:26.877029 systemd-networkd[1386]: lxc_health: Link DOWN
Aug 13 00:11:26.877035 systemd-networkd[1386]: lxc_health: Lost carrier
Aug 13 00:11:26.888209 containerd[1483]: time="2025-08-13T00:11:26.887847712Z" level=info msg="shim disconnected" id=2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205 namespace=k8s.io
Aug 13 00:11:26.888209 containerd[1483]: time="2025-08-13T00:11:26.888047102Z" level=warning msg="cleaning up after shim disconnected" id=2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205 namespace=k8s.io
Aug 13 00:11:26.888209 containerd[1483]: time="2025-08-13T00:11:26.888058792Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:11:26.899745 systemd[1]: cri-containerd-9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877.scope: Deactivated successfully.
Aug 13 00:11:26.900588 systemd[1]: cri-containerd-9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877.scope: Consumed 7.344s CPU time, 124.2M memory peak, 136K read from disk, 13.3M written to disk.
Aug 13 00:11:26.912441 containerd[1483]: time="2025-08-13T00:11:26.912397346Z" level=info msg="StopContainer for \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\" returns successfully"
Aug 13 00:11:26.913314 containerd[1483]: time="2025-08-13T00:11:26.913290977Z" level=info msg="StopPodSandbox for \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\""
Aug 13 00:11:26.913383 containerd[1483]: time="2025-08-13T00:11:26.913324157Z" level=info msg="Container to stop \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:11:26.917012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48-shm.mount: Deactivated successfully.
Aug 13 00:11:26.932705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877-rootfs.mount: Deactivated successfully.
Aug 13 00:11:26.934019 systemd[1]: cri-containerd-1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48.scope: Deactivated successfully.
Aug 13 00:11:26.946599 containerd[1483]: time="2025-08-13T00:11:26.946551536Z" level=info msg="shim disconnected" id=9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877 namespace=k8s.io
Aug 13 00:11:26.947200 containerd[1483]: time="2025-08-13T00:11:26.946991146Z" level=warning msg="cleaning up after shim disconnected" id=9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877 namespace=k8s.io
Aug 13 00:11:26.948442 containerd[1483]: time="2025-08-13T00:11:26.948422447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:11:26.974168 containerd[1483]: time="2025-08-13T00:11:26.974101622Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:11:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 00:11:26.976821 containerd[1483]: time="2025-08-13T00:11:26.976779174Z" level=info msg="shim disconnected" id=1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48 namespace=k8s.io
Aug 13 00:11:26.976821 containerd[1483]: time="2025-08-13T00:11:26.976819494Z" level=warning msg="cleaning up after shim disconnected" id=1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48 namespace=k8s.io
Aug 13 00:11:26.976936 containerd[1483]: time="2025-08-13T00:11:26.976828454Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:11:26.978393 containerd[1483]: time="2025-08-13T00:11:26.978368914Z" level=info msg="StopContainer for \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\" returns successfully"
Aug 13 00:11:26.979459 containerd[1483]: time="2025-08-13T00:11:26.979429235Z" level=info msg="StopPodSandbox for \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\""
Aug 13 00:11:26.979525 containerd[1483]: time="2025-08-13T00:11:26.979462055Z" level=info msg="Container to stop \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:11:26.979525 containerd[1483]: time="2025-08-13T00:11:26.979491085Z" level=info msg="Container to stop \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:11:26.979525 containerd[1483]: time="2025-08-13T00:11:26.979500105Z" level=info msg="Container to stop \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:11:26.979525 containerd[1483]: time="2025-08-13T00:11:26.979508335Z" level=info msg="Container to stop \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:11:26.979525 containerd[1483]: time="2025-08-13T00:11:26.979515775Z" level=info msg="Container to stop \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:11:26.988744 systemd[1]: cri-containerd-fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d.scope: Deactivated successfully.
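[editor's note] The sequence above is containerd's CRI shutdown path for the cilium pods: StopContainer sends SIGTERM with a grace period (30 s for the operator, 2 s for the agent), the per-container shim disconnects and is cleaned up, and StopPodSandbox then tears the sandboxes down. A minimal client-side sketch of the same kill-with-timeout pattern against containerd's Go client (v1 import paths), assuming the k8s.io namespace seen in the log and using the container ID from the log purely as an example:

    package main

    import (
    	"context"
    	"log"
    	"syscall"
    	"time"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed containers live in the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	container, err := client.LoadContainer(ctx,
    		"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205")
    	if err != nil {
    		log.Fatal(err)
    	}
    	task, err := container.Task(ctx, nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	exitCh, err := task.Wait(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Ask politely first, as StopContainer does with "signal terminated".
    	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
    		log.Fatal(err)
    	}
    	select {
    	case status := <-exitCh:
    		log.Printf("container exited with code %d", status.ExitCode())
    	case <-time.After(30 * time.Second):
    		_ = task.Kill(ctx, syscall.SIGKILL) // grace period expired
    	}
    }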
Aug 13 00:11:27.004723 containerd[1483]: time="2025-08-13T00:11:27.004582750Z" level=info msg="TearDown network for sandbox \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\" successfully"
Aug 13 00:11:27.004723 containerd[1483]: time="2025-08-13T00:11:27.004610790Z" level=info msg="StopPodSandbox for \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\" returns successfully"
Aug 13 00:11:27.024605 containerd[1483]: time="2025-08-13T00:11:27.024431321Z" level=info msg="shim disconnected" id=fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d namespace=k8s.io
Aug 13 00:11:27.024605 containerd[1483]: time="2025-08-13T00:11:27.024480791Z" level=warning msg="cleaning up after shim disconnected" id=fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d namespace=k8s.io
Aug 13 00:11:27.024605 containerd[1483]: time="2025-08-13T00:11:27.024489361Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:11:27.041162 containerd[1483]: time="2025-08-13T00:11:27.041087400Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:11:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 00:11:27.042511 containerd[1483]: time="2025-08-13T00:11:27.042466991Z" level=info msg="TearDown network for sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" successfully"
Aug 13 00:11:27.042511 containerd[1483]: time="2025-08-13T00:11:27.042498731Z" level=info msg="StopPodSandbox for \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" returns successfully"
Aug 13 00:11:27.144817 kubelet[2606]: I0813 00:11:27.143268 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-run\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.144817 kubelet[2606]: I0813 00:11:27.143297 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-etc-cni-netd\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.144817 kubelet[2606]: I0813 00:11:27.143320 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-config-path\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.144817 kubelet[2606]: I0813 00:11:27.143601 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.144817 kubelet[2606]: I0813 00:11:27.143665 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-bpf-maps\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.144817 kubelet[2606]: I0813 00:11:27.143684 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-host-proc-sys-kernel\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145407 kubelet[2606]: I0813 00:11:27.143706 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-hubble-tls\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145407 kubelet[2606]: I0813 00:11:27.143725 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a55ef678-4cfd-4ae6-9805-0df4627ac65c-clustermesh-secrets\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145407 kubelet[2606]: I0813 00:11:27.143742 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-hostproc\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145407 kubelet[2606]: I0813 00:11:27.143761 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/879e972d-983e-4dc8-98f9-bf6d900db40e-cilium-config-path\") pod \"879e972d-983e-4dc8-98f9-bf6d900db40e\" (UID: \"879e972d-983e-4dc8-98f9-bf6d900db40e\") "
Aug 13 00:11:27.145407 kubelet[2606]: I0813 00:11:27.143774 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-cgroup\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145407 kubelet[2606]: I0813 00:11:27.143789 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-lib-modules\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145540 kubelet[2606]: I0813 00:11:27.143802 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-xtables-lock\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145540 kubelet[2606]: I0813 00:11:27.143815 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cni-path\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145540 kubelet[2606]: I0813 00:11:27.143831 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-host-proc-sys-net\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145540 kubelet[2606]: I0813 00:11:27.143848 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2hvs\" (UniqueName: \"kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-kube-api-access-h2hvs\") pod \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\" (UID: \"a55ef678-4cfd-4ae6-9805-0df4627ac65c\") "
Aug 13 00:11:27.145540 kubelet[2606]: I0813 00:11:27.143865 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sghg8\" (UniqueName: \"kubernetes.io/projected/879e972d-983e-4dc8-98f9-bf6d900db40e-kube-api-access-sghg8\") pod \"879e972d-983e-4dc8-98f9-bf6d900db40e\" (UID: \"879e972d-983e-4dc8-98f9-bf6d900db40e\") "
Aug 13 00:11:27.145540 kubelet[2606]: I0813 00:11:27.143906 2606 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-run\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.146154 kubelet[2606]: I0813 00:11:27.145712 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.146451 kubelet[2606]: I0813 00:11:27.146422 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:11:27.148606 kubelet[2606]: I0813 00:11:27.148579 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879e972d-983e-4dc8-98f9-bf6d900db40e-kube-api-access-sghg8" (OuterVolumeSpecName: "kube-api-access-sghg8") pod "879e972d-983e-4dc8-98f9-bf6d900db40e" (UID: "879e972d-983e-4dc8-98f9-bf6d900db40e"). InnerVolumeSpecName "kube-api-access-sghg8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:11:27.149179 kubelet[2606]: I0813 00:11:27.148747 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.149179 kubelet[2606]: I0813 00:11:27.148780 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.149179 kubelet[2606]: I0813 00:11:27.148807 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.149179 kubelet[2606]: I0813 00:11:27.148829 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cni-path" (OuterVolumeSpecName: "cni-path") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.149179 kubelet[2606]: I0813 00:11:27.148849 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.150116 kubelet[2606]: I0813 00:11:27.149991 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879e972d-983e-4dc8-98f9-bf6d900db40e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "879e972d-983e-4dc8-98f9-bf6d900db40e" (UID: "879e972d-983e-4dc8-98f9-bf6d900db40e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:11:27.150116 kubelet[2606]: I0813 00:11:27.150029 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.150116 kubelet[2606]: I0813 00:11:27.150046 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.152223 kubelet[2606]: I0813 00:11:27.152201 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-kube-api-access-h2hvs" (OuterVolumeSpecName: "kube-api-access-h2hvs") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "kube-api-access-h2hvs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:11:27.152976 kubelet[2606]: I0813 00:11:27.152934 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:11:27.153022 kubelet[2606]: I0813 00:11:27.152986 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-hostproc" (OuterVolumeSpecName: "hostproc") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:11:27.155018 kubelet[2606]: I0813 00:11:27.154981 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a55ef678-4cfd-4ae6-9805-0df4627ac65c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a55ef678-4cfd-4ae6-9805-0df4627ac65c" (UID: "a55ef678-4cfd-4ae6-9805-0df4627ac65c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:11:27.244530 kubelet[2606]: I0813 00:11:27.244495 2606 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-lib-modules\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244530 kubelet[2606]: I0813 00:11:27.244522 2606 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-xtables-lock\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244530 kubelet[2606]: I0813 00:11:27.244531 2606 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cni-path\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244662 kubelet[2606]: I0813 00:11:27.244539 2606 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2hvs\" (UniqueName: \"kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-kube-api-access-h2hvs\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244662 kubelet[2606]: I0813 00:11:27.244549 2606 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sghg8\" (UniqueName: \"kubernetes.io/projected/879e972d-983e-4dc8-98f9-bf6d900db40e-kube-api-access-sghg8\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244662 kubelet[2606]: I0813 00:11:27.244557 2606 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-host-proc-sys-net\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244662 kubelet[2606]: I0813 00:11:27.244565 2606 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-etc-cni-netd\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244662 kubelet[2606]: I0813 00:11:27.244572 2606 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-config-path\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244662 kubelet[2606]: I0813 00:11:27.244580 2606 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a55ef678-4cfd-4ae6-9805-0df4627ac65c-hubble-tls\") on node \"172-234-216-155\" DevicePath \"\""
Aug 13 00:11:27.244662 kubelet[2606]: I0813 00:11:27.244587 2606 reconciler_common.go:299] "Volume detached for volume
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-bpf-maps\") on node \"172-234-216-155\" DevicePath \"\"" Aug 13 00:11:27.244662 kubelet[2606]: I0813 00:11:27.244595 2606 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-host-proc-sys-kernel\") on node \"172-234-216-155\" DevicePath \"\"" Aug 13 00:11:27.244836 kubelet[2606]: I0813 00:11:27.244602 2606 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a55ef678-4cfd-4ae6-9805-0df4627ac65c-clustermesh-secrets\") on node \"172-234-216-155\" DevicePath \"\"" Aug 13 00:11:27.244836 kubelet[2606]: I0813 00:11:27.244610 2606 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-hostproc\") on node \"172-234-216-155\" DevicePath \"\"" Aug 13 00:11:27.244836 kubelet[2606]: I0813 00:11:27.244617 2606 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a55ef678-4cfd-4ae6-9805-0df4627ac65c-cilium-cgroup\") on node \"172-234-216-155\" DevicePath \"\"" Aug 13 00:11:27.244836 kubelet[2606]: I0813 00:11:27.244624 2606 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/879e972d-983e-4dc8-98f9-bf6d900db40e-cilium-config-path\") on node \"172-234-216-155\" DevicePath \"\"" Aug 13 00:11:27.371445 kubelet[2606]: I0813 00:11:27.370401 2606 scope.go:117] "RemoveContainer" containerID="2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205" Aug 13 00:11:27.373142 containerd[1483]: time="2025-08-13T00:11:27.372842216Z" level=info msg="RemoveContainer for \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\"" Aug 13 00:11:27.376933 containerd[1483]: time="2025-08-13T00:11:27.376890379Z" level=info msg="RemoveContainer for \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\" returns successfully" Aug 13 00:11:27.379271 kubelet[2606]: I0813 00:11:27.378975 2606 scope.go:117] "RemoveContainer" containerID="2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205" Aug 13 00:11:27.379980 containerd[1483]: time="2025-08-13T00:11:27.379929780Z" level=error msg="ContainerStatus for \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\": not found" Aug 13 00:11:27.381114 kubelet[2606]: E0813 00:11:27.380869 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\": not found" containerID="2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205" Aug 13 00:11:27.381114 kubelet[2606]: I0813 00:11:27.380899 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205"} err="failed to get container status \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f25ec68c71b899c280fea3eb22b9d95d1cf42a2f2533d830b62d3cc0c603205\": not found" Aug 13 00:11:27.382521 systemd[1]: Removed 
slice kubepods-besteffort-pod879e972d_983e_4dc8_98f9_bf6d900db40e.slice - libcontainer container kubepods-besteffort-pod879e972d_983e_4dc8_98f9_bf6d900db40e.slice. Aug 13 00:11:27.383615 kubelet[2606]: I0813 00:11:27.383598 2606 scope.go:117] "RemoveContainer" containerID="9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877" Aug 13 00:11:27.389800 systemd[1]: Removed slice kubepods-burstable-poda55ef678_4cfd_4ae6_9805_0df4627ac65c.slice - libcontainer container kubepods-burstable-poda55ef678_4cfd_4ae6_9805_0df4627ac65c.slice. Aug 13 00:11:27.389883 systemd[1]: kubepods-burstable-poda55ef678_4cfd_4ae6_9805_0df4627ac65c.slice: Consumed 7.452s CPU time, 124.7M memory peak, 136K read from disk, 13.3M written to disk. Aug 13 00:11:27.392530 containerd[1483]: time="2025-08-13T00:11:27.392248027Z" level=info msg="RemoveContainer for \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\"" Aug 13 00:11:27.397253 containerd[1483]: time="2025-08-13T00:11:27.396196219Z" level=info msg="RemoveContainer for \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\" returns successfully" Aug 13 00:11:27.397317 kubelet[2606]: I0813 00:11:27.396345 2606 scope.go:117] "RemoveContainer" containerID="f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc" Aug 13 00:11:27.397516 containerd[1483]: time="2025-08-13T00:11:27.397481240Z" level=info msg="RemoveContainer for \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\"" Aug 13 00:11:27.399938 containerd[1483]: time="2025-08-13T00:11:27.399872872Z" level=info msg="RemoveContainer for \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\" returns successfully" Aug 13 00:11:27.400086 kubelet[2606]: I0813 00:11:27.399992 2606 scope.go:117] "RemoveContainer" containerID="7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df" Aug 13 00:11:27.402578 containerd[1483]: time="2025-08-13T00:11:27.402540773Z" level=info msg="RemoveContainer for \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\"" Aug 13 00:11:27.405966 containerd[1483]: time="2025-08-13T00:11:27.405844785Z" level=info msg="RemoveContainer for \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\" returns successfully" Aug 13 00:11:27.406019 kubelet[2606]: I0813 00:11:27.405979 2606 scope.go:117] "RemoveContainer" containerID="2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d" Aug 13 00:11:27.407277 containerd[1483]: time="2025-08-13T00:11:27.407238316Z" level=info msg="RemoveContainer for \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\"" Aug 13 00:11:27.409733 containerd[1483]: time="2025-08-13T00:11:27.409661807Z" level=info msg="RemoveContainer for \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\" returns successfully" Aug 13 00:11:27.409913 kubelet[2606]: I0813 00:11:27.409850 2606 scope.go:117] "RemoveContainer" containerID="81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834" Aug 13 00:11:27.411453 containerd[1483]: time="2025-08-13T00:11:27.411252898Z" level=info msg="RemoveContainer for \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\"" Aug 13 00:11:27.414823 containerd[1483]: time="2025-08-13T00:11:27.414789000Z" level=info msg="RemoveContainer for \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\" returns successfully" Aug 13 00:11:27.415077 kubelet[2606]: I0813 00:11:27.414954 2606 scope.go:117] "RemoveContainer" 
containerID="9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877" Aug 13 00:11:27.415242 containerd[1483]: time="2025-08-13T00:11:27.415195030Z" level=error msg="ContainerStatus for \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\": not found" Aug 13 00:11:27.415454 kubelet[2606]: E0813 00:11:27.415435 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\": not found" containerID="9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877" Aug 13 00:11:27.415541 kubelet[2606]: I0813 00:11:27.415513 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877"} err="failed to get container status \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\": rpc error: code = NotFound desc = an error occurred when try to find container \"9178212477e8e318cc45fe6e1788feb933851e798b3e57d8687c5a6cc4abc877\": not found" Aug 13 00:11:27.415541 kubelet[2606]: I0813 00:11:27.415537 2606 scope.go:117] "RemoveContainer" containerID="f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc" Aug 13 00:11:27.415711 containerd[1483]: time="2025-08-13T00:11:27.415680380Z" level=error msg="ContainerStatus for \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\": not found" Aug 13 00:11:27.416058 kubelet[2606]: E0813 00:11:27.415828 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\": not found" containerID="f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc" Aug 13 00:11:27.416058 kubelet[2606]: I0813 00:11:27.415852 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc"} err="failed to get container status \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\": rpc error: code = NotFound desc = an error occurred when try to find container \"f41199ff6e9e506098023a58ebc3b051fe8133d1c554f8dc57ec7d27f74e6afc\": not found" Aug 13 00:11:27.416058 kubelet[2606]: I0813 00:11:27.415868 2606 scope.go:117] "RemoveContainer" containerID="7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df" Aug 13 00:11:27.416183 containerd[1483]: time="2025-08-13T00:11:27.416004281Z" level=error msg="ContainerStatus for \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\": not found" Aug 13 00:11:27.416234 kubelet[2606]: E0813 00:11:27.416212 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\": not found" 
containerID="7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df" Aug 13 00:11:27.416264 kubelet[2606]: I0813 00:11:27.416234 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df"} err="failed to get container status \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e144456f8d68e404cb17cc48f0b68189bd847bc82b9d8a5ee159109124572df\": not found" Aug 13 00:11:27.416264 kubelet[2606]: I0813 00:11:27.416260 2606 scope.go:117] "RemoveContainer" containerID="2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d" Aug 13 00:11:27.416420 containerd[1483]: time="2025-08-13T00:11:27.416385871Z" level=error msg="ContainerStatus for \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\": not found" Aug 13 00:11:27.416527 kubelet[2606]: E0813 00:11:27.416508 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\": not found" containerID="2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d" Aug 13 00:11:27.416600 kubelet[2606]: I0813 00:11:27.416572 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d"} err="failed to get container status \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a81b0ce59fe948e03bcf1b54b5b931ae01e555d240cd17f081c3dfa2bc6c86d\": not found" Aug 13 00:11:27.416600 kubelet[2606]: I0813 00:11:27.416594 2606 scope.go:117] "RemoveContainer" containerID="81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834" Aug 13 00:11:27.416772 containerd[1483]: time="2025-08-13T00:11:27.416743991Z" level=error msg="ContainerStatus for \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\": not found" Aug 13 00:11:27.416878 kubelet[2606]: E0813 00:11:27.416856 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\": not found" containerID="81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834" Aug 13 00:11:27.416949 kubelet[2606]: I0813 00:11:27.416879 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834"} err="failed to get container status \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\": rpc error: code = NotFound desc = an error occurred when try to find container \"81ff16ca070dc3b181a034dbe43d64af63e5f9bce6c49b6e48215e9e424e1834\": not found" Aug 13 00:11:27.635667 kubelet[2606]: I0813 00:11:27.635618 2606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879e972d-983e-4dc8-98f9-bf6d900db40e" 
path="/var/lib/kubelet/pods/879e972d-983e-4dc8-98f9-bf6d900db40e/volumes" Aug 13 00:11:27.636197 kubelet[2606]: I0813 00:11:27.636169 2606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55ef678-4cfd-4ae6-9805-0df4627ac65c" path="/var/lib/kubelet/pods/a55ef678-4cfd-4ae6-9805-0df4627ac65c/volumes" Aug 13 00:11:27.819235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d-rootfs.mount: Deactivated successfully. Aug 13 00:11:27.819373 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d-shm.mount: Deactivated successfully. Aug 13 00:11:27.819458 systemd[1]: var-lib-kubelet-pods-a55ef678\x2d4cfd\x2d4ae6\x2d9805\x2d0df4627ac65c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh2hvs.mount: Deactivated successfully. Aug 13 00:11:27.819542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48-rootfs.mount: Deactivated successfully. Aug 13 00:11:27.819614 systemd[1]: var-lib-kubelet-pods-879e972d\x2d983e\x2d4dc8\x2d98f9\x2dbf6d900db40e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsghg8.mount: Deactivated successfully. Aug 13 00:11:27.819690 systemd[1]: var-lib-kubelet-pods-a55ef678\x2d4cfd\x2d4ae6\x2d9805\x2d0df4627ac65c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:11:27.819763 systemd[1]: var-lib-kubelet-pods-a55ef678\x2d4cfd\x2d4ae6\x2d9805\x2d0df4627ac65c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:11:28.820949 sshd[4711]: Connection closed by 139.178.89.65 port 58438 Aug 13 00:11:28.821861 sshd-session[4709]: pam_unix(sshd:session): session closed for user core Aug 13 00:11:28.825387 systemd[1]: sshd@57-172.234.216.155:22-139.178.89.65:58438.service: Deactivated successfully. Aug 13 00:11:28.828009 systemd[1]: session-55.scope: Deactivated successfully. Aug 13 00:11:28.829342 systemd-logind[1455]: Session 55 logged out. Waiting for processes to exit. Aug 13 00:11:28.830653 systemd-logind[1455]: Removed session 55. Aug 13 00:11:28.882324 systemd[1]: Started sshd@58-172.234.216.155:22-139.178.89.65:58440.service - OpenSSH per-connection server daemon (139.178.89.65:58440). Aug 13 00:11:29.202795 sshd[4874]: Accepted publickey for core from 139.178.89.65 port 58440 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE Aug 13 00:11:29.204626 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:11:29.209544 systemd-logind[1455]: New session 56 of user core. Aug 13 00:11:29.216250 systemd[1]: Started session-56.scope - Session 56 of User core. 
Aug 13 00:11:29.635048 containerd[1483]: time="2025-08-13T00:11:29.634979220Z" level=info msg="StopPodSandbox for \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\""
Aug 13 00:11:29.635986 containerd[1483]: time="2025-08-13T00:11:29.635531701Z" level=info msg="TearDown network for sandbox \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\" successfully"
Aug 13 00:11:29.635986 containerd[1483]: time="2025-08-13T00:11:29.635552041Z" level=info msg="StopPodSandbox for \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\" returns successfully"
Aug 13 00:11:29.636360 containerd[1483]: time="2025-08-13T00:11:29.636299161Z" level=info msg="RemovePodSandbox for \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\""
Aug 13 00:11:29.636360 containerd[1483]: time="2025-08-13T00:11:29.636326631Z" level=info msg="Forcibly stopping sandbox \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\""
Aug 13 00:11:29.636530 containerd[1483]: time="2025-08-13T00:11:29.636375141Z" level=info msg="TearDown network for sandbox \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\" successfully"
Aug 13 00:11:29.642183 containerd[1483]: time="2025-08-13T00:11:29.642114334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:11:29.642254 containerd[1483]: time="2025-08-13T00:11:29.642202824Z" level=info msg="RemovePodSandbox \"1f293f7de4a2748a415fabca4226d2ae72f132b79501af86f6893591e2d92c48\" returns successfully"
Aug 13 00:11:29.643161 containerd[1483]: time="2025-08-13T00:11:29.642590224Z" level=info msg="StopPodSandbox for \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\""
Aug 13 00:11:29.643161 containerd[1483]: time="2025-08-13T00:11:29.642651595Z" level=info msg="TearDown network for sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" successfully"
Aug 13 00:11:29.643161 containerd[1483]: time="2025-08-13T00:11:29.642661545Z" level=info msg="StopPodSandbox for \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" returns successfully"
Aug 13 00:11:29.643161 containerd[1483]: time="2025-08-13T00:11:29.643002235Z" level=info msg="RemovePodSandbox for \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\""
Aug 13 00:11:29.643161 containerd[1483]: time="2025-08-13T00:11:29.643024355Z" level=info msg="Forcibly stopping sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\""
Aug 13 00:11:29.643283 containerd[1483]: time="2025-08-13T00:11:29.643110785Z" level=info msg="TearDown network for sandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" successfully"
Aug 13 00:11:29.646518 containerd[1483]: time="2025-08-13T00:11:29.646484837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:11:29.646565 containerd[1483]: time="2025-08-13T00:11:29.646520257Z" level=info msg="RemovePodSandbox \"fca81e0b875994035fafeb5537a6eb402156aebbb3922721c80e0a46c6ec680d\" returns successfully"
Aug 13 00:11:29.679940 kubelet[2606]: I0813 00:11:29.679894 2606 memory_manager.go:355] "RemoveStaleState removing state" podUID="879e972d-983e-4dc8-98f9-bf6d900db40e" containerName="cilium-operator"
Aug 13 00:11:29.679940 kubelet[2606]: I0813 00:11:29.679925 2606 memory_manager.go:355] "RemoveStaleState removing state" podUID="a55ef678-4cfd-4ae6-9805-0df4627ac65c" containerName="cilium-agent"
Aug 13 00:11:29.691304 systemd[1]: Created slice kubepods-burstable-pod99b0eb50_0be4_41e9_9d10_2fc355896899.slice - libcontainer container kubepods-burstable-pod99b0eb50_0be4_41e9_9d10_2fc355896899.slice.
Aug 13 00:11:29.714161 sshd[4876]: Connection closed by 139.178.89.65 port 58440
Aug 13 00:11:29.714791 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
Aug 13 00:11:29.720946 systemd[1]: sshd@58-172.234.216.155:22-139.178.89.65:58440.service: Deactivated successfully.
Aug 13 00:11:29.721380 systemd-logind[1455]: Session 56 logged out. Waiting for processes to exit.
Aug 13 00:11:29.724911 systemd[1]: session-56.scope: Deactivated successfully.
Aug 13 00:11:29.728275 systemd-logind[1455]: Removed session 56.
Aug 13 00:11:29.759378 kubelet[2606]: I0813 00:11:29.759342 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-hostproc\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759378 kubelet[2606]: I0813 00:11:29.759378 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-cilium-cgroup\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759378 kubelet[2606]: I0813 00:11:29.759399 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-etc-cni-netd\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759605 kubelet[2606]: I0813 00:11:29.759414 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-lib-modules\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759605 kubelet[2606]: I0813 00:11:29.759432 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/99b0eb50-0be4-41e9-9d10-2fc355896899-cilium-ipsec-secrets\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759605 kubelet[2606]: I0813 00:11:29.759453 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99b0eb50-0be4-41e9-9d10-2fc355896899-cilium-config-path\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759605 kubelet[2606]: I0813 00:11:29.759497 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-host-proc-sys-kernel\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759605 kubelet[2606]: I0813 00:11:29.759521 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-cilium-run\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759733 kubelet[2606]: I0813 00:11:29.759538 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-host-proc-sys-net\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759733 kubelet[2606]: I0813 00:11:29.759558 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-cni-path\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759733 kubelet[2606]: I0813 00:11:29.759576 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-bpf-maps\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759733 kubelet[2606]: I0813 00:11:29.759593 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99b0eb50-0be4-41e9-9d10-2fc355896899-clustermesh-secrets\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759733 kubelet[2606]: I0813 00:11:29.759607 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9xwk\" (UniqueName: \"kubernetes.io/projected/99b0eb50-0be4-41e9-9d10-2fc355896899-kube-api-access-d9xwk\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759733 kubelet[2606]: I0813 00:11:29.759627 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99b0eb50-0be4-41e9-9d10-2fc355896899-xtables-lock\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.759874 kubelet[2606]: I0813 00:11:29.759642 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99b0eb50-0be4-41e9-9d10-2fc355896899-hubble-tls\") pod \"cilium-zg9sl\" (UID: \"99b0eb50-0be4-41e9-9d10-2fc355896899\") " pod="kube-system/cilium-zg9sl"
Aug 13 00:11:29.784370 systemd[1]: Started sshd@59-172.234.216.155:22-139.178.89.65:52686.service - OpenSSH per-connection server daemon (139.178.89.65:52686).
Aug 13 00:11:29.789319 kubelet[2606]: E0813 00:11:29.789201 2606 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:11:29.996889 kubelet[2606]: E0813 00:11:29.996418 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:29.997717 containerd[1483]: time="2025-08-13T00:11:29.997484216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zg9sl,Uid:99b0eb50-0be4-41e9-9d10-2fc355896899,Namespace:kube-system,Attempt:0,}"
Aug 13 00:11:30.020909 containerd[1483]: time="2025-08-13T00:11:30.020559298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:11:30.020909 containerd[1483]: time="2025-08-13T00:11:30.020606288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:11:30.020909 containerd[1483]: time="2025-08-13T00:11:30.020617368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:11:30.021936 containerd[1483]: time="2025-08-13T00:11:30.021865838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:11:30.040257 systemd[1]: Started cri-containerd-96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a.scope - libcontainer container 96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a.
Aug 13 00:11:30.062700 containerd[1483]: time="2025-08-13T00:11:30.062668610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zg9sl,Uid:99b0eb50-0be4-41e9-9d10-2fc355896899,Namespace:kube-system,Attempt:0,} returns sandbox id \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\""
Aug 13 00:11:30.063893 kubelet[2606]: E0813 00:11:30.063276 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:30.065504 containerd[1483]: time="2025-08-13T00:11:30.065480011Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:11:30.073938 containerd[1483]: time="2025-08-13T00:11:30.073915566Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2e0439a527b12dfaa1039ebcefa0d11f17c6e9203a2a36b4a07d1dc173c793f9\""
Aug 13 00:11:30.074457 containerd[1483]: time="2025-08-13T00:11:30.074439886Z" level=info msg="StartContainer for \"2e0439a527b12dfaa1039ebcefa0d11f17c6e9203a2a36b4a07d1dc173c793f9\""
Aug 13 00:11:30.107250 systemd[1]: Started cri-containerd-2e0439a527b12dfaa1039ebcefa0d11f17c6e9203a2a36b4a07d1dc173c793f9.scope - libcontainer container 2e0439a527b12dfaa1039ebcefa0d11f17c6e9203a2a36b4a07d1dc173c793f9.
Aug 13 00:11:30.120093 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 52686 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:11:30.122298 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:11:30.126927 systemd-logind[1455]: New session 57 of user core.
Aug 13 00:11:30.133794 systemd[1]: Started session-57.scope - Session 57 of User core.
Aug 13 00:11:30.139842 containerd[1483]: time="2025-08-13T00:11:30.139798170Z" level=info msg="StartContainer for \"2e0439a527b12dfaa1039ebcefa0d11f17c6e9203a2a36b4a07d1dc173c793f9\" returns successfully"
Aug 13 00:11:30.153490 systemd[1]: cri-containerd-2e0439a527b12dfaa1039ebcefa0d11f17c6e9203a2a36b4a07d1dc173c793f9.scope: Deactivated successfully.
Aug 13 00:11:30.177967 containerd[1483]: time="2025-08-13T00:11:30.177902050Z" level=info msg="shim disconnected" id=2e0439a527b12dfaa1039ebcefa0d11f17c6e9203a2a36b4a07d1dc173c793f9 namespace=k8s.io
Aug 13 00:11:30.177967 containerd[1483]: time="2025-08-13T00:11:30.177965131Z" level=warning msg="cleaning up after shim disconnected" id=2e0439a527b12dfaa1039ebcefa0d11f17c6e9203a2a36b4a07d1dc173c793f9 namespace=k8s.io
Aug 13 00:11:30.178168 containerd[1483]: time="2025-08-13T00:11:30.177974631Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:11:30.192296 containerd[1483]: time="2025-08-13T00:11:30.192256958Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:11:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 00:11:30.369250 sshd[4968]: Connection closed by 139.178.89.65 port 52686
Aug 13 00:11:30.369881 sshd-session[4889]: pam_unix(sshd:session): session closed for user core
Aug 13 00:11:30.374743 systemd-logind[1455]: Session 57 logged out. Waiting for processes to exit.
Aug 13 00:11:30.375056 systemd[1]: sshd@59-172.234.216.155:22-139.178.89.65:52686.service: Deactivated successfully.
Aug 13 00:11:30.377871 systemd[1]: session-57.scope: Deactivated successfully.
Aug 13 00:11:30.379454 systemd-logind[1455]: Removed session 57.
Aug 13 00:11:30.393247 kubelet[2606]: E0813 00:11:30.392475 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:30.396026 containerd[1483]: time="2025-08-13T00:11:30.395207125Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:11:30.405747 containerd[1483]: time="2025-08-13T00:11:30.405704730Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"df79b6e65f5c1eb759af72d0a3275e3db542c608d98c8a01c0bec2de11e671b4\""
Aug 13 00:11:30.406935 containerd[1483]: time="2025-08-13T00:11:30.406074820Z" level=info msg="StartContainer for \"df79b6e65f5c1eb759af72d0a3275e3db542c608d98c8a01c0bec2de11e671b4\""
Aug 13 00:11:30.433346 systemd[1]: Started sshd@60-172.234.216.155:22-139.178.89.65:52694.service - OpenSSH per-connection server daemon (139.178.89.65:52694).
Aug 13 00:11:30.439206 systemd[1]: Started cri-containerd-df79b6e65f5c1eb759af72d0a3275e3db542c608d98c8a01c0bec2de11e671b4.scope - libcontainer container df79b6e65f5c1eb759af72d0a3275e3db542c608d98c8a01c0bec2de11e671b4.
Aug 13 00:11:30.467033 containerd[1483]: time="2025-08-13T00:11:30.466923712Z" level=info msg="StartContainer for \"df79b6e65f5c1eb759af72d0a3275e3db542c608d98c8a01c0bec2de11e671b4\" returns successfully"
Aug 13 00:11:30.476629 systemd[1]: cri-containerd-df79b6e65f5c1eb759af72d0a3275e3db542c608d98c8a01c0bec2de11e671b4.scope: Deactivated successfully.
Aug 13 00:11:30.499350 containerd[1483]: time="2025-08-13T00:11:30.499278669Z" level=info msg="shim disconnected" id=df79b6e65f5c1eb759af72d0a3275e3db542c608d98c8a01c0bec2de11e671b4 namespace=k8s.io
Aug 13 00:11:30.499350 containerd[1483]: time="2025-08-13T00:11:30.499343029Z" level=warning msg="cleaning up after shim disconnected" id=df79b6e65f5c1eb759af72d0a3275e3db542c608d98c8a01c0bec2de11e671b4 namespace=k8s.io
Aug 13 00:11:30.499350 containerd[1483]: time="2025-08-13T00:11:30.499351659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:11:30.767830 sshd[5025]: Accepted publickey for core from 139.178.89.65 port 52694 ssh2: RSA SHA256:trvsTpDZo8JHtNj6ZqDP8H8EwRH6NK9UYePSAOdL8CE
Aug 13 00:11:30.769002 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:11:30.773746 systemd-logind[1455]: New session 58 of user core.
Aug 13 00:11:30.778268 systemd[1]: Started session-58.scope - Session 58 of User core.
Aug 13 00:11:30.962552 kubelet[2606]: I0813 00:11:30.962516 2606 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:11:30.962897 kubelet[2606]: I0813 00:11:30.962562 2606 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:11:30.963761 kubelet[2606]: I0813 00:11:30.963738 2606 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:11:30.964999 kubelet[2606]: I0813 00:11:30.964697 2606 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c" size=18897442 runtimeHandler=""
Aug 13 00:11:30.965149 containerd[1483]: time="2025-08-13T00:11:30.964812344Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:11:30.966102 containerd[1483]: time="2025-08-13T00:11:30.965864045Z" level=info msg="ImageDelete event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:11:30.966589 containerd[1483]: time="2025-08-13T00:11:30.966521505Z" level=info msg="ImageDelete event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:11:30.986019 containerd[1483]: time="2025-08-13T00:11:30.985968955Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" returns successfully"
Aug 13 00:11:31.010759 kubelet[2606]: I0813 00:11:31.010715 2606 eviction_manager.go:383] "Eviction manager: able to reduce resource pressure without evicting pods." resourceName="ephemeral-storage"
Aug 13 00:11:31.396265 kubelet[2606]: E0813 00:11:31.396230 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:31.399012 containerd[1483]: time="2025-08-13T00:11:31.398961818Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:11:31.416629 containerd[1483]: time="2025-08-13T00:11:31.416593797Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d\""
Aug 13 00:11:31.418211 containerd[1483]: time="2025-08-13T00:11:31.417264117Z" level=info msg="StartContainer for \"0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d\""
Aug 13 00:11:31.417619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977251807.mount: Deactivated successfully.
Aug 13 00:11:31.461265 systemd[1]: Started cri-containerd-0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d.scope - libcontainer container 0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d.
Aug 13 00:11:31.524850 containerd[1483]: time="2025-08-13T00:11:31.524818792Z" level=info msg="StartContainer for \"0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d\" returns successfully"
Aug 13 00:11:31.536576 systemd[1]: cri-containerd-0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d.scope: Deactivated successfully.
Aug 13 00:11:31.558937 containerd[1483]: time="2025-08-13T00:11:31.558851350Z" level=info msg="shim disconnected" id=0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d namespace=k8s.io
Aug 13 00:11:31.558937 containerd[1483]: time="2025-08-13T00:11:31.558927390Z" level=warning msg="cleaning up after shim disconnected" id=0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d namespace=k8s.io
Aug 13 00:11:31.558937 containerd[1483]: time="2025-08-13T00:11:31.558936540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:11:31.571971 containerd[1483]: time="2025-08-13T00:11:31.571931637Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:11:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 00:11:31.868485 systemd[1]: run-containerd-runc-k8s.io-0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d-runc.dmdB1l.mount: Deactivated successfully.
Aug 13 00:11:31.868600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2db089001856d04cea6f555e004c141d24c7fd368ab40b56c73a849dc8064d-rootfs.mount: Deactivated successfully.
Aug 13 00:11:32.400664 kubelet[2606]: E0813 00:11:32.400631 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:32.407166 containerd[1483]: time="2025-08-13T00:11:32.403989030Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:11:32.422707 containerd[1483]: time="2025-08-13T00:11:32.422666959Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407\""
Aug 13 00:11:32.423653 containerd[1483]: time="2025-08-13T00:11:32.423524250Z" level=info msg="StartContainer for \"aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407\""
Aug 13 00:11:32.460298 systemd[1]: Started cri-containerd-aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407.scope - libcontainer container aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407.
Aug 13 00:11:32.488820 systemd[1]: cri-containerd-aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407.scope: Deactivated successfully.
Aug 13 00:11:32.491014 containerd[1483]: time="2025-08-13T00:11:32.490574503Z" level=info msg="StartContainer for \"aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407\" returns successfully"
Aug 13 00:11:32.511476 containerd[1483]: time="2025-08-13T00:11:32.511415604Z" level=info msg="shim disconnected" id=aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407 namespace=k8s.io
Aug 13 00:11:32.511476 containerd[1483]: time="2025-08-13T00:11:32.511470904Z" level=warning msg="cleaning up after shim disconnected" id=aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407 namespace=k8s.io
Aug 13 00:11:32.511476 containerd[1483]: time="2025-08-13T00:11:32.511479864Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:11:32.868623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaa931ccd7f7aea4081afc2bf864796ca0aad434ec31e73760b9e8d8402c7407-rootfs.mount: Deactivated successfully.
Aug 13 00:11:33.404088 kubelet[2606]: E0813 00:11:33.404060 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:33.406808 containerd[1483]: time="2025-08-13T00:11:33.406687040Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:11:33.431570 containerd[1483]: time="2025-08-13T00:11:33.431522702Z" level=info msg="CreateContainer within sandbox \"96fd4e10f9d28ce4afe7e2420c1127599981202645d487ebbae6e72aafc32c4a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d50fb7635e687db9de8f507b6bfb2b27ce1bef38a27809802feba9af1d68dbd\""
Aug 13 00:11:33.432404 containerd[1483]: time="2025-08-13T00:11:33.431997282Z" level=info msg="StartContainer for \"7d50fb7635e687db9de8f507b6bfb2b27ce1bef38a27809802feba9af1d68dbd\""
Aug 13 00:11:33.464008 systemd[1]: run-containerd-runc-k8s.io-7d50fb7635e687db9de8f507b6bfb2b27ce1bef38a27809802feba9af1d68dbd-runc.vtm4pu.mount: Deactivated successfully.
Aug 13 00:11:33.477247 systemd[1]: Started cri-containerd-7d50fb7635e687db9de8f507b6bfb2b27ce1bef38a27809802feba9af1d68dbd.scope - libcontainer container 7d50fb7635e687db9de8f507b6bfb2b27ce1bef38a27809802feba9af1d68dbd.
Aug 13 00:11:33.512900 containerd[1483]: time="2025-08-13T00:11:33.512863822Z" level=info msg="StartContainer for \"7d50fb7635e687db9de8f507b6bfb2b27ce1bef38a27809802feba9af1d68dbd\" returns successfully"
Aug 13 00:11:34.000795 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 00:11:34.408887 kubelet[2606]: E0813 00:11:34.408628 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:35.116709 systemd[1]: run-containerd-runc-k8s.io-7d50fb7635e687db9de8f507b6bfb2b27ce1bef38a27809802feba9af1d68dbd-runc.NgLW08.mount: Deactivated successfully.
Aug 13 00:11:35.998353 kubelet[2606]: E0813 00:11:35.998029 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:36.946792 systemd-networkd[1386]: lxc_health: Link UP
Aug 13 00:11:36.947756 systemd-networkd[1386]: lxc_health: Gained carrier
Aug 13 00:11:37.235938 systemd[1]: run-containerd-runc-k8s.io-7d50fb7635e687db9de8f507b6bfb2b27ce1bef38a27809802feba9af1d68dbd-runc.byuuJI.mount: Deactivated successfully.
Aug 13 00:11:37.636598 kubelet[2606]: E0813 00:11:37.635404 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:37.999850 kubelet[2606]: E0813 00:11:37.999437 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:38.031015 kubelet[2606]: I0813 00:11:38.030672 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zg9sl" podStartSLOduration=9.030659099 podStartE2EDuration="9.030659099s" podCreationTimestamp="2025-08-13 00:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:11:34.424826436 +0000 UTC m=+364.890455322" watchObservedRunningTime="2025-08-13 00:11:38.030659099 +0000 UTC m=+368.496287965"
Aug 13 00:11:38.078396 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Aug 13 00:11:38.418145 kubelet[2606]: E0813 00:11:38.417485 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 00:11:43.649151 sshd[5069]: Connection closed by 139.178.89.65 port 52694
Aug 13 00:11:43.650065 sshd-session[5025]: pam_unix(sshd:session): session closed for user core
Aug 13 00:11:43.655760 systemd[1]: sshd@60-172.234.216.155:22-139.178.89.65:52694.service: Deactivated successfully.
Aug 13 00:11:43.658443 systemd[1]: session-58.scope: Deactivated successfully.
Aug 13 00:11:43.659251 systemd-logind[1455]: Session 58 logged out. Waiting for processes to exit.
Aug 13 00:11:43.660522 systemd-logind[1455]: Removed session 58.