Aug 13 00:46:31.936295 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 13 00:46:31.936318 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:46:31.936327 kernel: BIOS-provided physical RAM map:
Aug 13 00:46:31.936333 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 00:46:31.936338 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 00:46:31.936347 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:46:31.936354 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 00:46:31.936360 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 00:46:31.936365 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:46:31.936371 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 00:46:31.936377 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:46:31.936383 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:46:31.936388 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 00:46:31.936394 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:46:31.936404 kernel: NX (Execute Disable) protection: active
Aug 13 00:46:31.936411 kernel: APIC: Static calls initialized
Aug 13 00:46:31.936417 kernel: SMBIOS 2.8 present.
Aug 13 00:46:31.936423 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 00:46:31.936429 kernel: Hypervisor detected: KVM
Aug 13 00:46:31.936438 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:46:31.936444 kernel: kvm-clock: using sched offset of 4567040990 cycles
Aug 13 00:46:31.938506 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:46:31.938519 kernel: tsc: Detected 2000.000 MHz processor
Aug 13 00:46:31.938526 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:46:31.938533 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:46:31.938539 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 00:46:31.938546 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 00:46:31.938552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:46:31.938564 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 00:46:31.938570 kernel: Using GB pages for direct mapping
Aug 13 00:46:31.938576 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:46:31.938583 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 00:46:31.938589 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:31.938596 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:31.938602 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:31.938608 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 00:46:31.938615 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:31.938624 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:31.938630 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:31.938637 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:31.938647 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 00:46:31.938653 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 00:46:31.938660 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 00:46:31.938669 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 00:46:31.938676 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 00:46:31.938682 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 00:46:31.938689 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 00:46:31.938695 kernel: No NUMA configuration found
Aug 13 00:46:31.938702 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 00:46:31.938708 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Aug 13 00:46:31.938715 kernel: Zone ranges:
Aug 13 00:46:31.938721 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:46:31.938731 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 00:46:31.938738 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 00:46:31.938744 kernel: Movable zone start for each node
Aug 13 00:46:31.938750 kernel: Early memory node ranges
Aug 13 00:46:31.938757 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:46:31.938763 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 00:46:31.938770 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 00:46:31.938777 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 00:46:31.938783 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:46:31.938793 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:46:31.938799 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 00:46:31.938805 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:46:31.938812 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:46:31.938818 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:46:31.938825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:46:31.938831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:46:31.938838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:46:31.938844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:46:31.938854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:46:31.938861 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:46:31.938867 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:46:31.938874 kernel: TSC deadline timer available
Aug 13 00:46:31.938880 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 00:46:31.938886 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:46:31.938893 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:46:31.938899 kernel: kvm-guest: setup PV sched yield
Aug 13 00:46:31.938906 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 00:46:31.938916 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:46:31.938922 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:46:31.938929 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:46:31.938935 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 00:46:31.938942 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 00:46:31.938948 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:46:31.938955 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:46:31.938961 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:46:31.938969 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:46:31.938979 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:46:31.938985 kernel: random: crng init done
Aug 13 00:46:31.938992 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:46:31.938998 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:46:31.939005 kernel: Fallback order for Node 0: 0
Aug 13 00:46:31.939011 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Aug 13 00:46:31.939018 kernel: Policy zone: Normal
Aug 13 00:46:31.939024 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:46:31.939034 kernel: software IO TLB: area num 2.
Aug 13 00:46:31.939041 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 229356K reserved, 0K cma-reserved)
Aug 13 00:46:31.939047 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:46:31.939054 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 13 00:46:31.939060 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 00:46:31.939067 kernel: Dynamic Preempt: voluntary
Aug 13 00:46:31.939073 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:46:31.939080 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:46:31.939087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:46:31.939097 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:46:31.939104 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:46:31.939110 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:46:31.939117 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:46:31.939123 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:46:31.939130 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:46:31.939136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:46:31.939143 kernel: Console: colour VGA+ 80x25
Aug 13 00:46:31.939149 kernel: printk: console [tty0] enabled
Aug 13 00:46:31.939158 kernel: printk: console [ttyS0] enabled
Aug 13 00:46:31.939165 kernel: ACPI: Core revision 20230628
Aug 13 00:46:31.939172 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:46:31.939178 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:46:31.939195 kernel: x2apic enabled
Aug 13 00:46:31.939204 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:46:31.939211 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 00:46:31.939218 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 00:46:31.939225 kernel: kvm-guest: setup PV IPIs
Aug 13 00:46:31.939231 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:46:31.939238 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 00:46:31.939245 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 00:46:31.939255 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:46:31.939262 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:46:31.939269 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:46:31.939276 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:46:31.939282 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:46:31.939292 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:46:31.939299 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 00:46:31.939306 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:46:31.939313 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:46:31.939320 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 00:46:31.939327 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 00:46:31.939334 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 00:46:31.939340 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 00:46:31.939350 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:46:31.939357 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:46:31.939364 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:46:31.939371 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:46:31.939377 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 00:46:31.939384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:46:31.939391 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 00:46:31.939398 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 00:46:31.939405 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:46:31.939414 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:46:31.939421 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:46:31.939428 kernel: landlock: Up and running.
Aug 13 00:46:31.939435 kernel: SELinux: Initializing.
Aug 13 00:46:31.939442 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:46:31.939449 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:46:31.939485 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 00:46:31.939493 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:31.939500 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:31.939511 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:31.939518 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:46:31.939525 kernel: ... version: 0
Aug 13 00:46:31.939531 kernel: ... bit width: 48
Aug 13 00:46:31.939538 kernel: ... generic registers: 6
Aug 13 00:46:31.939545 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:46:31.939552 kernel: ... max period: 00007fffffffffff
Aug 13 00:46:31.939558 kernel: ... fixed-purpose events: 0
Aug 13 00:46:31.939565 kernel: ... event mask: 000000000000003f
Aug 13 00:46:31.939575 kernel: signal: max sigframe size: 3376
Aug 13 00:46:31.939582 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:46:31.939589 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:46:31.939595 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:46:31.939602 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:46:31.939609 kernel: .... node #0, CPUs: #1
Aug 13 00:46:31.939615 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:46:31.939622 kernel: smpboot: Max logical packages: 1
Aug 13 00:46:31.939629 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 00:46:31.939639 kernel: devtmpfs: initialized
Aug 13 00:46:31.939645 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:46:31.939652 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:46:31.939659 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:46:31.939666 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:46:31.939673 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:46:31.939679 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:46:31.939686 kernel: audit: type=2000 audit(1755045991.725:1): state=initialized audit_enabled=0 res=1
Aug 13 00:46:31.939693 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:46:31.939703 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:46:31.939710 kernel: cpuidle: using governor menu
Aug 13 00:46:31.939716 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:46:31.939723 kernel: dca service started, version 1.12.1
Aug 13 00:46:31.939730 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 00:46:31.939737 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 00:46:31.939744 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:46:31.939750 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:46:31.939757 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:46:31.939767 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:46:31.939774 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:46:31.939781 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:46:31.939788 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:46:31.939795 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:46:31.939801 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:46:31.939808 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:46:31.939815 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 00:46:31.939822 kernel: ACPI: Interpreter enabled
Aug 13 00:46:31.939831 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:46:31.939838 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:46:31.939845 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:46:31.939851 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:46:31.939858 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:46:31.939865 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:46:31.940048 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:46:31.940178 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:46:31.940303 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:46:31.940313 kernel: PCI host bridge to bus 0000:00
Aug 13 00:46:31.940435 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:46:31.940576 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:46:31.940685 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:31.940792 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 00:46:31.940898 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:46:31.941012 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 00:46:31.941118 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:46:31.941251 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 00:46:31.941378 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 00:46:31.942045 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:46:31.942169 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:46:31.942291 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:46:31.942404 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:46:31.942570 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Aug 13 00:46:31.942696 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 00:46:31.942811 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:46:31.942928 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:46:31.943064 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 00:46:31.943187 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 00:46:31.943302 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:46:31.943415 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:46:31.943578 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:46:31.943705 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 00:46:31.943820 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:46:31.943944 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 00:46:31.944068 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Aug 13 00:46:31.944199 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:46:31.944333 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 00:46:31.944448 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 00:46:31.944504 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:46:31.944512 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:46:31.944519 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:46:31.944541 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:46:31.944548 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:46:31.944555 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:46:31.944561 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:46:31.944568 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:46:31.944575 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:46:31.944582 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:46:31.944588 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:46:31.944595 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:46:31.944605 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:46:31.944612 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:46:31.944619 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:46:31.944626 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:46:31.944632 kernel: iommu: Default domain type: Translated
Aug 13 00:46:31.944639 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:46:31.944646 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:46:31.944653 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:46:31.944660 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 00:46:31.944670 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 00:46:31.944789 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:46:31.945015 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:46:31.945126 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:46:31.945135 kernel: vgaarb: loaded
Aug 13 00:46:31.945143 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:46:31.945149 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:46:31.945156 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:46:31.945167 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:46:31.945174 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:46:31.945181 kernel: pnp: PnP ACPI init
Aug 13 00:46:31.945302 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:46:31.945312 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 00:46:31.945319 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:46:31.945326 kernel: NET: Registered PF_INET protocol family
Aug 13 00:46:31.945333 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:46:31.945344 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:46:31.945351 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:46:31.945358 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:46:31.945365 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:46:31.945372 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:46:31.945379 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:46:31.945386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:46:31.945393 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:46:31.945400 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:46:31.945545 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:46:31.945651 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:46:31.945915 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:31.946017 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 00:46:31.946119 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:46:31.946220 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 00:46:31.946229 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:46:31.946236 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:46:31.946248 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 00:46:31.946255 kernel: Initialise system trusted keyrings
Aug 13 00:46:31.946262 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:46:31.946269 kernel: Key type asymmetric registered
Aug 13 00:46:31.946275 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:46:31.946282 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 00:46:31.946289 kernel: io scheduler mq-deadline registered
Aug 13 00:46:31.946296 kernel: io scheduler kyber registered
Aug 13 00:46:31.946303 kernel: io scheduler bfq registered
Aug 13 00:46:31.946309 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:46:31.946320 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:46:31.946327 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:46:31.946334 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:46:31.946340 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:46:31.946348 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:46:31.946354 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:46:31.946361 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:46:31.946368 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:46:31.946546 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 00:46:31.946664 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 00:46:31.946770 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:46:31 UTC (1755045991)
Aug 13 00:46:31.946875 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:46:31.946884 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 00:46:31.946891 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:46:31.946898 kernel: Segment Routing with IPv6
Aug 13 00:46:31.946905 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:46:31.946915 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:46:31.946922 kernel: Key type dns_resolver registered
Aug 13 00:46:31.946929 kernel: IPI shorthand broadcast: enabled
Aug 13 00:46:31.946936 kernel: sched_clock: Marking stable (673002750, 200690780)->(912814880, -39121350)
Aug 13 00:46:31.946943 kernel: registered taskstats version 1
Aug 13 00:46:31.946950 kernel: Loading compiled-in X.509 certificates
Aug 13 00:46:31.946956 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 13 00:46:31.946963 kernel: Key type .fscrypt registered
Aug 13 00:46:31.946970 kernel: Key type fscrypt-provisioning registered
Aug 13 00:46:31.946980 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:46:31.946987 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:46:31.946994 kernel: ima: No architecture policies found
Aug 13 00:46:31.947001 kernel: clk: Disabling unused clocks
Aug 13 00:46:31.947007 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 13 00:46:31.947014 kernel: Write protecting the kernel read-only data: 38912k
Aug 13 00:46:31.947021 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 13 00:46:31.947028 kernel: Run /init as init process
Aug 13 00:46:31.947035 kernel: with arguments:
Aug 13 00:46:31.947044 kernel: /init
Aug 13 00:46:31.947051 kernel: with environment:
Aug 13 00:46:31.947057 kernel: HOME=/
Aug 13 00:46:31.947064 kernel: TERM=linux
Aug 13 00:46:31.947071 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:46:31.947079 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:46:31.947089 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:31.947097 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:31.947107 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:31.947114 systemd[1]: Running in initrd.
Aug 13 00:46:31.947122 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:46:31.947129 systemd[1]: Hostname set to .
Aug 13 00:46:31.947137 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:46:31.947162 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:46:31.947174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:31.947181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:31.947189 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:46:31.947197 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:31.947205 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:46:31.947213 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:46:31.947221 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:46:31.947231 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:46:31.947239 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:31.947246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:31.947254 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:46:31.947261 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:31.947269 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:31.947276 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:46:31.947284 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:31.947293 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:31.947301 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:46:31.947308 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:46:31.947316 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:31.947324 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:31.947331 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:31.947338 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:46:31.947346 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:46:31.947353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:31.947364 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:46:31.947371 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:46:31.947379 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:31.947386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:31.947394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:31.947401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:31.947429 systemd-journald[177]: Collecting audit messages is disabled.
Aug 13 00:46:31.947463 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:31.947489 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:46:31.947498 systemd-journald[177]: Journal started
Aug 13 00:46:31.947515 systemd-journald[177]: Runtime Journal (/run/log/journal/cd571a55601e4713b37c54721d07a065) is 8M, max 78.3M, 70.3M free.
Aug 13 00:46:31.948186 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:46:31.931006 systemd-modules-load[178]: Inserted module 'overlay'
Aug 13 00:46:32.002984 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:46:32.002999 kernel: Bridge firewalling registered
Aug 13 00:46:32.003009 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:31.969593 systemd-modules-load[178]: Inserted module 'br_netfilter'
Aug 13 00:46:32.003863 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:32.004914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:32.006099 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:32.014607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:46:32.016660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:32.024592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:32.051713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:46:32.056572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:32.061028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:32.063805 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:32.064706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:32.071634 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:46:32.073757 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:32.085653 dracut-cmdline[212]: dracut-dracut-053
Aug 13 00:46:32.089115 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:46:32.115765 systemd-resolved[213]: Positive Trust Anchors:
Aug 13 00:46:32.116525 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:46:32.117343 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:46:32.120651 systemd-resolved[213]: Defaulting to hostname 'linux'.
Aug 13 00:46:32.121998 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:32.122636 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:32.175495 kernel: SCSI subsystem initialized
Aug 13 00:46:32.184492 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:46:32.195496 kernel: iscsi: registered transport (tcp)
Aug 13 00:46:32.215752 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:46:32.215800 kernel: QLogic iSCSI HBA Driver
Aug 13 00:46:32.272989 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:32.279595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:46:32.306665 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:46:32.306764 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:46:32.309746 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:46:32.352483 kernel: raid6: avx2x4 gen() 30744 MB/s
Aug 13 00:46:32.370480 kernel: raid6: avx2x2 gen() 28568 MB/s
Aug 13 00:46:32.388909 kernel: raid6: avx2x1 gen() 20701 MB/s
Aug 13 00:46:32.388929 kernel: raid6: using algorithm avx2x4 gen() 30744 MB/s
Aug 13 00:46:32.408080 kernel: raid6: .... xor() 4819 MB/s, rmw enabled
Aug 13 00:46:32.408110 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:46:32.427497 kernel: xor: automatically using best checksumming function avx
Aug 13 00:46:32.553492 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:46:32.568821 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:32.575650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:32.607951 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Aug 13 00:46:32.613124 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:32.619581 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:46:32.636764 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Aug 13 00:46:32.672791 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:32.677674 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:32.737543 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:32.744614 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:46:32.757221 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:32.761828 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:32.762672 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:32.765362 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:32.771619 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:46:32.787931 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:32.810598 kernel: scsi host0: Virtio SCSI HBA
Aug 13 00:46:32.958473 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 00:46:32.989491 kernel: libata version 3.00 loaded.
Aug 13 00:46:32.992480 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:46:33.015060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:46:33.016069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:33.016776 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:46:33.017304 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:33.017409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:33.032968 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 00:46:33.033168 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 00:46:33.033180 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 00:46:33.033190 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:46:33.033199 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 00:46:33.033341 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 00:46:33.033503 kernel: scsi host1: ahci
Aug 13 00:46:33.019547 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:33.037310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:33.064482 kernel: scsi host2: ahci
Aug 13 00:46:33.067476 kernel: scsi host3: ahci
Aug 13 00:46:33.068477 kernel: scsi host4: ahci
Aug 13 00:46:33.071486 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 00:46:33.071692 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 00:46:33.071836 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 00:46:33.071981 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 00:46:33.072120 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 00:46:33.072482 kernel: scsi host5: ahci
Aug 13 00:46:33.074569 kernel: scsi host6: ahci
Aug 13 00:46:33.074736 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Aug 13 00:46:33.074748 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Aug 13 00:46:33.074757 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Aug 13 00:46:33.074766 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Aug 13 00:46:33.074776 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Aug 13 00:46:33.074784 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Aug 13 00:46:33.074798 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:46:33.074807 kernel: GPT:9289727 != 9297919
Aug 13 00:46:33.074816 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:46:33.074825 kernel: GPT:9289727 != 9297919
Aug 13 00:46:33.074833 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:46:33.074842 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:46:33.074852 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 00:46:33.134506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:33.157880 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:46:33.174522 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:33.384569 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:33.384623 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:33.384642 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:33.384651 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:33.386041 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:33.386472 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:33.425473 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/sda3 scanned by (udev-worker) (465)
Aug 13 00:46:33.430468 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (458)
Aug 13 00:46:33.431202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 00:46:33.443826 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 00:46:33.464016 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 00:46:33.470770 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 00:46:33.471354 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 00:46:33.483575 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:46:33.488662 disk-uuid[566]: Primary Header is updated.
Aug 13 00:46:33.488662 disk-uuid[566]: Secondary Entries is updated.
Aug 13 00:46:33.488662 disk-uuid[566]: Secondary Header is updated.
Aug 13 00:46:33.493479 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:46:33.498466 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:46:34.502536 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:46:34.503666 disk-uuid[567]: The operation has completed successfully.
Aug 13 00:46:34.555517 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:46:34.555631 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:46:34.597594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:46:34.600593 sh[581]: Success
Aug 13 00:46:34.613480 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 00:46:34.656539 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:46:34.665021 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:46:34.665754 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:46:34.683781 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 13 00:46:34.683807 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:34.685883 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:46:34.689100 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:46:34.689121 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:46:34.697474 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 00:46:34.698727 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:46:34.699653 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:46:34.704561 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:46:34.706578 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:46:34.727185 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:46:34.727210 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:34.727220 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:46:34.732130 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 00:46:34.732152 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:46:34.738484 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:46:34.740137 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:46:34.748598 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:46:34.828561 ignition[676]: Ignition 2.20.0
Aug 13 00:46:34.828572 ignition[676]: Stage: fetch-offline
Aug 13 00:46:34.828608 ignition[676]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:34.828619 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:34.828689 ignition[676]: parsed url from cmdline: ""
Aug 13 00:46:34.828693 ignition[676]: no config URL provided
Aug 13 00:46:34.828698 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:46:34.831645 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:34.828707 ignition[676]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:46:34.832931 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:34.828712 ignition[676]: failed to fetch config: resource requires networking
Aug 13 00:46:34.828867 ignition[676]: Ignition finished successfully
Aug 13 00:46:34.839601 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:46:34.864996 systemd-networkd[764]: lo: Link UP
Aug 13 00:46:34.865635 systemd-networkd[764]: lo: Gained carrier
Aug 13 00:46:34.867154 systemd-networkd[764]: Enumeration completed
Aug 13 00:46:34.867217 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:46:34.867831 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:34.867835 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:46:34.868965 systemd-networkd[764]: eth0: Link UP
Aug 13 00:46:34.868969 systemd-networkd[764]: eth0: Gained carrier
Aug 13 00:46:34.868976 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:34.869624 systemd[1]: Reached target network.target - Network.
Aug 13 00:46:34.877628 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:46:34.892337 ignition[767]: Ignition 2.20.0
Aug 13 00:46:34.892352 ignition[767]: Stage: fetch
Aug 13 00:46:34.892584 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:34.892601 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:34.892714 ignition[767]: parsed url from cmdline: ""
Aug 13 00:46:34.892721 ignition[767]: no config URL provided
Aug 13 00:46:34.892729 ignition[767]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:46:34.892743 ignition[767]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:46:34.892772 ignition[767]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 00:46:34.892958 ignition[767]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 00:46:35.093549 ignition[767]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 00:46:35.093712 ignition[767]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 00:46:35.344527 systemd-networkd[764]: eth0: DHCPv4 address 172.234.29.142/24, gateway 172.234.29.1 acquired from 23.40.197.124
Aug 13 00:46:35.493828 ignition[767]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 00:46:35.599310 ignition[767]: PUT result: OK
Aug 13 00:46:35.599356 ignition[767]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 00:46:35.728926 ignition[767]: GET result: OK
Aug 13 00:46:35.729002 ignition[767]: parsing config with SHA512: be0ce71503580a861e20a3c7c7df646a4cc2cd5dec4d848f013c355a8b11173f83aea3f3fedcfa00aee861e20b707aa2fc9bc797419c4b6fe190182c8745ffc2
Aug 13 00:46:35.732225 unknown[767]: fetched base config from "system"
Aug 13 00:46:35.732234 unknown[767]: fetched base config from "system"
Aug 13 00:46:35.732725 ignition[767]: fetch: fetch complete
Aug 13 00:46:35.732240 unknown[767]: fetched user config from "akamai"
Aug 13 00:46:35.732730 ignition[767]: fetch: fetch passed
Aug 13 00:46:35.735989 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:46:35.732810 ignition[767]: Ignition finished successfully
Aug 13 00:46:35.745597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:46:35.758132 ignition[775]: Ignition 2.20.0
Aug 13 00:46:35.758142 ignition[775]: Stage: kargs
Aug 13 00:46:35.760075 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:35.758264 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:35.758274 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:35.758932 ignition[775]: kargs: kargs passed
Aug 13 00:46:35.758966 ignition[775]: Ignition finished successfully
Aug 13 00:46:35.778824 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:46:35.787776 ignition[781]: Ignition 2.20.0
Aug 13 00:46:35.787786 ignition[781]: Stage: disks
Aug 13 00:46:35.787914 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:35.787924 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:35.788652 ignition[781]: disks: disks passed
Aug 13 00:46:35.789736 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:46:35.788688 ignition[781]: Ignition finished successfully
Aug 13 00:46:35.812432 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:46:35.813506 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:46:35.814533 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:46:35.815696 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:46:35.816859 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:46:35.822562 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:46:35.836597 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:46:35.839664 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:46:35.844544 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:46:35.915495 kernel: EXT4-fs (sda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 13 00:46:35.915743 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:46:35.916795 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:46:35.923524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:46:35.926584 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:46:35.927513 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:46:35.927559 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:46:35.927631 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:46:35.935335 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:46:35.942541 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (799)
Aug 13 00:46:35.942561 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:46:35.942572 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:35.942581 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:46:35.946323 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 00:46:35.946346 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:46:35.951568 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:46:35.954641 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:46:35.994947 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:46:35.999566 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:46:36.004809 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:46:36.009323 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:46:36.095802 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:46:36.101541 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:46:36.105549 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:46:36.109480 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:46:36.111731 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:46:36.134478 ignition[912]: INFO : Ignition 2.20.0
Aug 13 00:46:36.134478 ignition[912]: INFO : Stage: mount
Aug 13 00:46:36.136168 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:36.136168 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:36.140374 ignition[912]: INFO : mount: mount passed
Aug 13 00:46:36.140374 ignition[912]: INFO : Ignition finished successfully
Aug 13 00:46:36.138584 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:46:36.146576 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:46:36.148039 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:46:36.364726 systemd-networkd[764]: eth0: Gained IPv6LL
Aug 13 00:46:36.920653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:46:36.937479 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (923)
Aug 13 00:46:36.941725 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:46:36.941752 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:36.941763 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:46:36.947784 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 00:46:36.947807 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:46:36.950215 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:46:36.972755 ignition[940]: INFO : Ignition 2.20.0
Aug 13 00:46:36.972755 ignition[940]: INFO : Stage: files
Aug 13 00:46:36.974209 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:36.974209 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:36.974209 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:46:36.976555 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:46:36.976555 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:46:36.978036 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:46:36.978036 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:46:36.978036 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:46:36.977309 unknown[940]: wrote ssh authorized keys file for user: core
Aug 13 00:46:36.981132 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 00:46:36.981132 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 00:46:37.147686 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:46:39.071703 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 00:46:39.073198 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:46:39.073198 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 00:46:39.203867 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:46:39.408344 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:46:39.408344 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:46:39.410858 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 00:46:39.838105 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 00:46:40.196349 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 00:46:40.196349 ignition[940]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:46:40.199888 ignition[940]: INFO : files: files passed
Aug 13 00:46:40.199888 ignition[940]: INFO : Ignition finished successfully
Aug 13 00:46:40.200228 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:46:40.213628 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:46:40.217311 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:46:40.218875 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:46:40.218980 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:46:40.235032 initrd-setup-root-after-ignition[969]: grep:
Aug 13 00:46:40.235919 initrd-setup-root-after-ignition[973]: grep:
Aug 13 00:46:40.235919 initrd-setup-root-after-ignition[969]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:40.235919 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:40.238191 initrd-setup-root-after-ignition[973]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:40.237768 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:46:40.238985 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:46:40.251598 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:46:40.281676 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:46:40.282007 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:46:40.283740 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:46:40.284605 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:46:40.285902 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:46:40.291623 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:46:40.303818 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:46:40.311664 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:46:40.321218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:40.321863 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:40.322493 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:46:40.323044 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:46:40.323152 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:46:40.324390 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:46:40.325203 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:46:40.326375 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:46:40.327443 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:46:40.328566 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:46:40.329810 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:46:40.330868 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:40.331983 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:46:40.333181 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:46:40.334442 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:46:40.335584 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:46:40.335688 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:40.337353 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:40.338129 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:40.339286 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:46:40.339386 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:40.340434 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:46:40.340587 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:40.341784 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:46:40.341892 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:46:40.342655 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:46:40.342772 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:46:40.351213 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:46:40.354677 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:46:40.355234 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:46:40.355381 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:40.357012 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:46:40.357151 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:40.363858 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:46:40.363971 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:46:40.371991 ignition[993]: INFO : Ignition 2.20.0
Aug 13 00:46:40.371991 ignition[993]: INFO : Stage: umount
Aug 13 00:46:40.375529 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:40.375529 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:40.375529 ignition[993]: INFO : umount: umount passed
Aug 13 00:46:40.375529 ignition[993]: INFO : Ignition finished successfully
Aug 13 00:46:40.377720 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:46:40.377837 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:46:40.379615 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:46:40.379699 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:46:40.381813 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:46:40.381892 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:40.383942 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:46:40.384027 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:46:40.384609 systemd[1]: Stopped target network.target - Network.
Aug 13 00:46:40.385070 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:46:40.385124 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:40.385693 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:46:40.408392 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:46:40.409569 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:40.410198 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:46:40.411236 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:46:40.412303 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:46:40.412350 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:40.413316 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:46:40.413357 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:40.414578 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:46:40.414632 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:46:40.415760 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:46:40.415807 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:40.416871 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:46:40.417965 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:40.420530 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:46:40.421135 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:46:40.421241 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:46:40.422714 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:46:40.422803 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:46:40.425560 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:46:40.425709 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:40.430850 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:46:40.432089 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:46:40.432211 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:46:40.434102 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:46:40.434706 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:46:40.434768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:40.439587 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:46:40.440113 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:46:40.440165 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:40.441775 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:46:40.441837 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:40.443445 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:46:40.443513 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:40.444071 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:46:40.444145 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:40.445235 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:40.447753 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:46:40.447818 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:40.457379 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:46:40.457508 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:46:40.463100 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:46:40.463265 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:40.464488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:46:40.464534 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:40.465502 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:46:40.465538 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:40.466606 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:46:40.466653 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:40.468202 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:46:40.468249 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:40.469400 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:46:40.469448 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:40.483585 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:46:40.484158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:46:40.484223 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:40.484886 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 00:46:40.485121 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:40.485944 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:46:40.485991 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:40.487003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:40.487049 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:40.489536 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:46:40.489601 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:40.494213 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:46:40.494307 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:46:40.495606 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:46:40.502842 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:46:40.509743 systemd[1]: Switching root.
Aug 13 00:46:40.533477 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:46:40.533519 systemd-journald[177]: Journal stopped
Aug 13 00:46:41.561157 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:46:41.561189 kernel: SELinux: policy capability open_perms=1
Aug 13 00:46:41.561204 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:46:41.561218 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:46:41.561233 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:46:41.561251 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:46:41.561267 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:46:41.561281 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:46:41.561295 kernel: audit: type=1403 audit(1755046000.651:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:46:41.561311 systemd[1]: Successfully loaded SELinux policy in 45.834ms.
Aug 13 00:46:41.561327 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.478ms.
Aug 13 00:46:41.561343 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:41.561358 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:41.561373 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:41.561389 systemd[1]: Detected first boot.
Aug 13 00:46:41.561405 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:46:41.561425 zram_generator::config[1038]: No configuration found.
Aug 13 00:46:41.561442 kernel: Guest personality initialized and is inactive
Aug 13 00:46:41.561475 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:46:41.561493 kernel: Initialized host personality
Aug 13 00:46:41.561508 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:46:41.561525 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:46:41.561543 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:46:41.561559 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:46:41.561582 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:46:41.561600 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:46:41.561616 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:46:41.561633 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:46:41.561649 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:46:41.561666 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:46:41.561682 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:46:41.561702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:46:41.561718 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:46:41.561727 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:46:41.561737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:41.561747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:41.561757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:46:41.561766 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:46:41.561776 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:46:41.561788 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:41.561799 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:46:41.561811 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:41.561821 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:46:41.561831 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:46:41.561841 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:46:41.561851 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:46:41.561861 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:41.561873 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:41.561882 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:41.561892 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:41.561902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:46:41.561911 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:46:41.561921 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:46:41.561931 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:41.561943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:41.561953 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:41.561963 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:46:41.561973 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:46:41.561983 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:46:41.561995 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:46:41.562007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:41.562017 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:46:41.562026 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:46:41.562036 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:46:41.562047 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:46:41.562057 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:46:41.562067 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:46:41.562077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:46:41.562089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:41.562099 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:46:41.562109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:46:41.562119 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:46:41.562129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:46:41.562139 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:46:41.562149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:46:41.562159 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:46:41.562171 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:46:41.562181 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:46:41.562190 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:46:41.562200 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:46:41.562212 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:46:41.562221 kernel: fuse: init (API version 7.39)
Aug 13 00:46:41.562231 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:41.562242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:41.562254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:46:41.562264 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:46:41.562274 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:46:41.562283 kernel: ACPI: bus type drm_connector registered
Aug 13 00:46:41.562293 kernel: loop: module loaded
Aug 13 00:46:41.562302 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:41.562312 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:46:41.562322 systemd[1]: Stopped verity-setup.service.
Aug 13 00:46:41.562334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:41.562364 systemd-journald[1119]: Collecting audit messages is disabled.
Aug 13 00:46:41.562384 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:46:41.562394 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:46:41.562407 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:46:41.562417 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:46:41.562426 systemd-journald[1119]: Journal started
Aug 13 00:46:41.562445 systemd-journald[1119]: Runtime Journal (/run/log/journal/21c76fa255e64a59a51b09d85023b73d) is 8M, max 78.3M, 70.3M free.
Aug 13 00:46:41.563295 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:46:41.240721 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:46:41.249123 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 00:46:41.249645 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:46:41.569259 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:41.569965 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:46:41.570844 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:46:41.572839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:41.573741 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:46:41.574163 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:46:41.576062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:46:41.576828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:46:41.578899 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:46:41.579113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:46:41.579941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:46:41.580142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:46:41.581867 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:46:41.582072 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:46:41.583952 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:46:41.584168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:46:41.586881 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:41.587760 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:41.588679 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:46:41.595628 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 00:46:41.608409 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:46:41.615629 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:46:41.620744 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:46:41.621857 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:46:41.622530 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:46:41.626270 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 00:46:41.631986 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 00:46:41.635583 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:46:41.636243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:46:41.642591 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:46:41.644836 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:46:41.645516 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:46:41.650649 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:46:41.651243 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:46:41.659256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:41.665345 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:46:41.668560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:46:41.674819 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:46:41.675498 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:46:41.677509 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 00:46:41.708794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:41.731720 systemd-journald[1119]: Time spent on flushing to /var/log/journal/21c76fa255e64a59a51b09d85023b73d is 66.293ms for 997 entries.
Aug 13 00:46:41.731720 systemd-journald[1119]: System Journal (/var/log/journal/21c76fa255e64a59a51b09d85023b73d) is 8M, max 195.6M, 187.6M free.
Aug 13 00:46:41.815536 systemd-journald[1119]: Received client request to flush runtime journal.
Aug 13 00:46:41.815591 kernel: loop0: detected capacity change from 0 to 147912
Aug 13 00:46:41.815616 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:46:41.815630 kernel: loop1: detected capacity change from 0 to 224512
Aug 13 00:46:41.715495 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:46:41.717150 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:46:41.729584 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 00:46:41.739320 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 00:46:41.749679 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:41.771675 systemd-tmpfiles[1165]: ACLs are not supported, ignoring.
Aug 13 00:46:41.771688 systemd-tmpfiles[1165]: ACLs are not supported, ignoring.
Aug 13 00:46:41.788838 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 00:46:41.796004 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:41.806743 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:46:41.807661 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 00:46:41.819889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:46:41.847508 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:46:41.854475 kernel: loop2: detected capacity change from 0 to 8
Aug 13 00:46:41.857005 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:41.872394 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Aug 13 00:46:41.872655 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Aug 13 00:46:41.878124 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:41.890485 kernel: loop3: detected capacity change from 0 to 138176
Aug 13 00:46:41.934485 kernel: loop4: detected capacity change from 0 to 147912
Aug 13 00:46:41.962684 kernel: loop5: detected capacity change from 0 to 224512
Aug 13 00:46:41.986924 kernel: loop6: detected capacity change from 0 to 8
Aug 13 00:46:41.994776 kernel: loop7: detected capacity change from 0 to 138176
Aug 13 00:46:42.015050 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Aug 13 00:46:42.016194 (sd-merge)[1194]: Merged extensions into '/usr'.
Aug 13 00:46:42.021512 systemd[1]: Reload requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:46:42.021585 systemd[1]: Reloading...
Aug 13 00:46:42.134488 zram_generator::config[1225]: No configuration found.
Aug 13 00:46:42.239556 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:46:42.253548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:46:42.309756 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:46:42.310415 systemd[1]: Reloading finished in 288 ms.
Aug 13 00:46:42.335952 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:46:42.336952 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 00:46:42.337896 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 00:46:42.346834 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:46:42.350591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:46:42.356156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:42.376196 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:46:42.376625 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 00:46:42.377445 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:46:42.377699 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Aug 13 00:46:42.377768 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Aug 13 00:46:42.378513 systemd[1]: Reload requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
Aug 13 00:46:42.378525 systemd[1]: Reloading...
Aug 13 00:46:42.381912 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:46:42.381919 systemd-tmpfiles[1267]: Skipping /boot
Aug 13 00:46:42.393945 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:46:42.394018 systemd-tmpfiles[1267]: Skipping /boot
Aug 13 00:46:42.429145 systemd-udevd[1268]: Using default interface naming scheme 'v255'.
Aug 13 00:46:42.484485 zram_generator::config[1299]: No configuration found.
Aug 13 00:46:42.657493 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 00:46:42.664474 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1311)
Aug 13 00:46:42.683470 kernel: ACPI: button: Power Button [PWRF]
Aug 13 00:46:42.701300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:46:42.725518 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 00:46:42.731481 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 00:46:42.731521 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 13 00:46:42.733745 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 00:46:42.760480 kernel: EDAC MC: Ver: 3.0.0
Aug 13 00:46:42.791499 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:46:42.801173 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 00:46:42.801724 systemd[1]: Reloading finished in 422 ms.
Aug 13 00:46:42.807883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:42.820618 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:42.840008 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 00:46:42.843309 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:46:42.866750 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 00:46:42.867371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:42.871602 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:46:42.876868 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 00:46:42.878505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:46:42.882160 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 00:46:42.884394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:46:42.887269 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:46:42.895909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:46:42.896893 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:46:42.905615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:46:42.907669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:46:42.910600 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 00:46:42.911238 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:46:42.914023 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 00:46:42.924030 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:46:42.932204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:42.945850 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 00:46:42.948640 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 00:46:42.953199 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:42.953806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:42.954990 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 00:46:42.956991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:46:42.957188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:46:42.958072 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:46:42.958270 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:46:42.960156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:46:42.960358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:46:42.961856 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:46:42.962554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:46:42.975953 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:42.984669 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 00:46:42.985236 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:46:42.985297 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:46:42.987609 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 00:46:42.992518 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 00:46:42.996720 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:46:42.997787 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 00:46:43.006629 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:46:43.014687 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 00:46:43.030974 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 00:46:43.033606 augenrules[1420]: No rules
Aug 13 00:46:43.041703 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 00:46:43.043385 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:46:43.043754 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:46:43.046102 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 00:46:43.051641 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 00:46:43.069836 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 00:46:43.136987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:43.182170 systemd-resolved[1392]: Positive Trust Anchors:
Aug 13 00:46:43.182489 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:46:43.182601 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:46:43.182787 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 00:46:43.183325 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 00:46:43.187344 systemd-networkd[1389]: lo: Link UP
Aug 13 00:46:43.187355 systemd-networkd[1389]: lo: Gained carrier
Aug 13 00:46:43.188255 systemd-resolved[1392]: Defaulting to hostname 'linux'.
Aug 13 00:46:43.189783 systemd-networkd[1389]: Enumeration completed
Aug 13 00:46:43.189847 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:43.190174 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:43.190178 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:46:43.190806 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:46:43.191120 systemd-networkd[1389]: eth0: Link UP
Aug 13 00:46:43.191169 systemd-networkd[1389]: eth0: Gained carrier
Aug 13 00:46:43.191245 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:43.191888 systemd[1]: Reached target network.target - Network.
Aug 13 00:46:43.192419 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:43.192978 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:46:43.193652 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 00:46:43.194263 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 00:46:43.195436 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 00:46:43.196122 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 00:46:43.196715 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 00:46:43.197277 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:46:43.197307 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:46:43.197810 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:46:43.199446 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 00:46:43.201703 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 00:46:43.204674 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 00:46:43.205441 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 00:46:43.206142 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 00:46:43.209100 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 00:46:43.210069 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 00:46:43.211927 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 00:46:43.214600 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 00:46:43.217575 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 00:46:43.218281 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:46:43.218807 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:46:43.219329 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:46:43.219363 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:46:43.228599 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 00:46:43.231622 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 00:46:43.237921 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 00:46:43.240898 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 00:46:43.252692 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 00:46:43.253780 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:46:43.256652 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 00:46:43.267588 jq[1450]: false
Aug 13 00:46:43.269899 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 00:46:43.284324 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found loop4
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found loop5
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found loop6
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found loop7
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found sda
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found sda1
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found sda2
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found sda3
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found usr
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found sda4
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found sda6
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found sda7
Aug 13 00:46:43.293246 extend-filesystems[1451]: Found sda9
Aug 13 00:46:43.293246 extend-filesystems[1451]: Checking size of /dev/sda9
Aug 13 00:46:43.374629 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks
Aug 13 00:46:43.374659 kernel: EXT4-fs (sda9): resized filesystem to 555003
Aug 13 00:46:43.287633 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 00:46:43.374755 coreos-metadata[1448]: Aug 13 00:46:43.357 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 00:46:43.347261 dbus-daemon[1449]: [system] SELinux support is enabled
Aug 13 00:46:43.375600 extend-filesystems[1451]: Resized partition /dev/sda9
Aug 13 00:46:43.294622 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 00:46:43.377023 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024)
Aug 13 00:46:43.377023 extend-filesystems[1468]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 00:46:43.377023 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 00:46:43.377023 extend-filesystems[1468]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long.
Aug 13 00:46:43.298221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:46:43.384367 extend-filesystems[1451]: Resized filesystem in /dev/sda9
Aug 13 00:46:43.298700 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:46:43.302640 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 00:46:43.336599 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 00:46:43.338549 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 00:46:43.351679 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 00:46:43.358767 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:46:43.359030 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 00:46:43.359574 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:46:43.359808 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 00:46:43.374885 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:46:43.375141 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 00:46:43.376020 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:46:43.376241 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 00:46:43.387649 update_engine[1462]: I20250813 00:46:43.387588  1462 main.cc:92] Flatcar Update Engine starting
Aug 13 00:46:43.391585 update_engine[1462]: I20250813 00:46:43.388828  1462 update_check_scheduler.cc:74] Next update check in 2m29s
Aug 13 00:46:43.395702 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:46:43.395751 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 00:46:43.396402 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:46:43.396422 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 00:46:43.398305 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 00:46:43.408618 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 00:46:43.410570 jq[1470]: true
Aug 13 00:46:43.409867 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 00:46:43.416268 tar[1477]: linux-amd64/LICENSE
Aug 13 00:46:43.417306 tar[1477]: linux-amd64/helm
Aug 13 00:46:43.452871 jq[1492]: true
Aug 13 00:46:43.477788 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1313)
Aug 13 00:46:43.561368 bash[1513]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:46:43.563152 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 00:46:43.565686 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:46:43.577974 systemd-logind[1461]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 00:46:43.578009 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 00:46:43.578532 systemd-logind[1461]: New seat seat0.
Aug 13 00:46:43.588588 systemd[1]: Starting sshkeys.service...
Aug 13 00:46:43.589136 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 00:46:43.615879 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 00:46:43.623899 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 00:46:43.697137 coreos-metadata[1522]: Aug 13 00:46:43.696 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 00:46:43.712526 systemd-networkd[1389]: eth0: DHCPv4 address 172.234.29.142/24, gateway 172.234.29.1 acquired from 23.40.197.124
Aug 13 00:46:43.712878 dbus-daemon[1449]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1389 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 00:46:43.716272 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Aug 13 00:46:43.727077 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 00:46:43.756536 containerd[1485]: time="2025-08-13T00:46:43.756447480Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Aug 13 00:46:43.818278 containerd[1485]: time="2025-08-13T00:46:43.818230470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:46:43.824594 containerd[1485]: time="2025-08-13T00:46:43.824557680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:46:43.824594 containerd[1485]: time="2025-08-13T00:46:43.824588100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:46:43.824667 containerd[1485]: time="2025-08-13T00:46:43.824603760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:46:43.825791 containerd[1485]: time="2025-08-13T00:46:43.825757030Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 00:46:43.825791 containerd[1485]: time="2025-08-13T00:46:43.825789520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.825884400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.825905480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.826156980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.826172050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.826185280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.826194680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.826289370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.827225140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.827372310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.827384800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:46:43.829474 containerd[1485]: time="2025-08-13T00:46:43.827502250Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:46:43.829671 containerd[1485]: time="2025-08-13T00:46:43.827556530Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:46:43.831393 containerd[1485]: time="2025-08-13T00:46:43.831364930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:46:43.831437 containerd[1485]: time="2025-08-13T00:46:43.831420140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:46:43.831482 containerd[1485]: time="2025-08-13T00:46:43.831444250Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 00:46:43.831630 containerd[1485]: time="2025-08-13T00:46:43.831605920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 00:46:43.831664 containerd[1485]: time="2025-08-13T00:46:43.831632310Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:46:43.831780 containerd[1485]: time="2025-08-13T00:46:43.831751090Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:46:43.832645 containerd[1485]: time="2025-08-13T00:46:43.832623250Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:46:43.832760 containerd[1485]: time="2025-08-13T00:46:43.832735930Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 00:46:43.832804 containerd[1485]: time="2025-08-13T00:46:43.832760890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 00:46:43.832804 containerd[1485]: time="2025-08-13T00:46:43.832774600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 00:46:43.832804 containerd[1485]: time="2025-08-13T00:46:43.832787360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:46:43.832804 containerd[1485]: time="2025-08-13T00:46:43.832799750Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:46:43.832867 containerd[1485]: time="2025-08-13T00:46:43.832811090Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:46:43.832867 containerd[1485]: time="2025-08-13T00:46:43.832823320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:46:43.832867 containerd[1485]: time="2025-08-13T00:46:43.832835890Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:46:43.832867 containerd[1485]: time="2025-08-13T00:46:43.832847710Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:46:43.832867 containerd[1485]: time="2025-08-13T00:46:43.832858120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:46:43.832867 containerd[1485]: time="2025-08-13T00:46:43.832868700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:46:43.832966 containerd[1485]: time="2025-08-13T00:46:43.832887180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.832966 containerd[1485]: time="2025-08-13T00:46:43.832900020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.832966 containerd[1485]: time="2025-08-13T00:46:43.832911660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.832966 containerd[1485]: time="2025-08-13T00:46:43.832923330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.832966 containerd[1485]: time="2025-08-13T00:46:43.832940800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.832966 containerd[1485]: time="2025-08-13T00:46:43.832952680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.832966 containerd[1485]: time="2025-08-13T00:46:43.832962990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.832973390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.832986670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.832999980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.833010190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.833020960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.833031210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.833042510Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.833059750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.833070720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.833083 containerd[1485]: time="2025-08-13T00:46:43.833080660Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:46:43.836467 containerd[1485]: time="2025-08-13T00:46:43.833826990Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:46:43.836467 containerd[1485]: time="2025-08-13T00:46:43.833850460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 00:46:43.836467 containerd[1485]: time="2025-08-13T00:46:43.833859960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:46:43.836467 containerd[1485]: time="2025-08-13T00:46:43.833927420Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 00:46:43.836467 containerd[1485]: time="2025-08-13T00:46:43.834128920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.836467 containerd[1485]: time="2025-08-13T00:46:43.834139850Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 00:46:43.836467 containerd[1485]: time="2025-08-13T00:46:43.834149720Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 00:46:43.836467 containerd[1485]: time="2025-08-13T00:46:43.834169170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:46:43.836622 containerd[1485]: time="2025-08-13T00:46:43.835480330Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:46:43.836622 containerd[1485]: time="2025-08-13T00:46:43.835521440Z" level=info msg="Connect containerd service" Aug 13 00:46:43.836622 containerd[1485]: time="2025-08-13T00:46:43.835541350Z" level=info msg="using legacy CRI server" Aug 13 00:46:43.836622 containerd[1485]: time="2025-08-13T00:46:43.835547220Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:46:43.836622 containerd[1485]: time="2025-08-13T00:46:43.835627570Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:46:43.838465 containerd[1485]: time="2025-08-13T00:46:43.837343660Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:46:43.838465 containerd[1485]: time="2025-08-13T00:46:43.837623310Z" level=info msg="Start subscribing containerd event" Aug 13 
00:46:43.838465 containerd[1485]: time="2025-08-13T00:46:43.837660050Z" level=info msg="Start recovering state" Aug 13 00:46:43.838465 containerd[1485]: time="2025-08-13T00:46:43.837717630Z" level=info msg="Start event monitor" Aug 13 00:46:43.838465 containerd[1485]: time="2025-08-13T00:46:43.837741200Z" level=info msg="Start snapshots syncer" Aug 13 00:46:43.838465 containerd[1485]: time="2025-08-13T00:46:43.837750790Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:46:43.838465 containerd[1485]: time="2025-08-13T00:46:43.837757830Z" level=info msg="Start streaming server" Aug 13 00:46:43.838690 containerd[1485]: time="2025-08-13T00:46:43.838651090Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:46:43.838771 containerd[1485]: time="2025-08-13T00:46:43.838710950Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:46:43.838857 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:46:43.840603 containerd[1485]: time="2025-08-13T00:46:43.840567580Z" level=info msg="containerd successfully booted in 0.088769s" Aug 13 00:46:43.887971 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 00:46:43.889915 dbus-daemon[1449]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:46:43.893054 dbus-daemon[1449]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1526 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:46:43.902817 systemd[1]: Starting polkit.service - Authorization Manager... 
Aug 13 00:46:43.908355 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:46:43.918985 polkitd[1531]: Started polkitd version 121
Aug 13 00:46:43.927726 polkitd[1531]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 00:46:43.927789 polkitd[1531]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 00:46:43.930340 polkitd[1531]: Finished loading, compiling and executing 2 rules
Aug 13 00:46:43.931771 dbus-daemon[1449]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 00:46:43.932581 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 00:46:43.932710 polkitd[1531]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 00:46:43.937865 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 00:46:43.945712 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 00:46:43.950609 systemd-hostnamed[1526]: Hostname set to <172-234-29-142> (transient)
Aug 13 00:46:43.951243 systemd-resolved[1392]: System hostname changed to '172-234-29-142'.
Aug 13 00:46:43.959841 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:46:43.960112 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 00:46:43.968751 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 00:46:43.978983 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 00:46:43.986790 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 00:46:43.990626 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 00:46:43.991539 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 00:46:44.097621 tar[1477]: linux-amd64/README.md
Aug 13 00:46:44.112653 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 00:46:44.369509 coreos-metadata[1448]: Aug 13 00:46:44.369 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Aug 13 00:46:44.471077 coreos-metadata[1448]: Aug 13 00:46:44.471 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Aug 13 00:46:44.683843 coreos-metadata[1448]: Aug 13 00:46:44.683 INFO Fetch successful
Aug 13 00:46:44.683843 coreos-metadata[1448]: Aug 13 00:46:44.683 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Aug 13 00:46:44.707348 coreos-metadata[1522]: Aug 13 00:46:44.707 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Aug 13 00:46:44.808384 coreos-metadata[1522]: Aug 13 00:46:44.808 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Aug 13 00:46:44.965923 coreos-metadata[1522]: Aug 13 00:46:44.965 INFO Fetch successful
Aug 13 00:46:44.983229 update-ssh-keys[1564]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:46:44.983738 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 00:46:44.987196 systemd[1]: Finished sshkeys.service.
Aug 13 00:46:45.013172 coreos-metadata[1448]: Aug 13 00:46:45.013 INFO Fetch successful
Aug 13 00:46:45.068604 systemd-networkd[1389]: eth0: Gained IPv6LL
Aug 13 00:46:45.070349 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Aug 13 00:46:45.073287 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 00:46:45.075385 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 00:46:45.080445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:46:45.088634 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 00:46:45.090723 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 00:46:45.094141 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 00:46:45.107686 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 00:46:45.657669 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 00:46:45.665126 systemd[1]: Started sshd@0-172.234.29.142:22-139.178.89.65:41760.service - OpenSSH per-connection server daemon (139.178.89.65:41760).
Aug 13 00:46:45.954181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:46:45.955249 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 00:46:45.956419 systemd[1]: Startup finished in 801ms (kernel) + 8.954s (initrd) + 5.349s (userspace) = 15.105s.
Aug 13 00:46:45.958425 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:46:45.998594 sshd[1599]: Accepted publickey for core from 139.178.89.65 port 41760 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:46:46.001409 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:46:46.008937 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 00:46:46.017855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 00:46:46.034842 systemd-logind[1461]: New session 1 of user core.
Aug 13 00:46:46.040570 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 00:46:46.049874 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 00:46:46.053489 (systemd)[1613]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:46:46.056509 systemd-logind[1461]: New session c1 of user core.
Aug 13 00:46:46.180261 systemd[1613]: Queued start job for default target default.target.
Aug 13 00:46:46.185828 systemd[1613]: Created slice app.slice - User Application Slice.
Aug 13 00:46:46.185853 systemd[1613]: Reached target paths.target - Paths.
Aug 13 00:46:46.185895 systemd[1613]: Reached target timers.target - Timers.
Aug 13 00:46:46.188426 systemd[1613]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 00:46:46.199425 systemd[1613]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 00:46:46.199575 systemd[1613]: Reached target sockets.target - Sockets.
Aug 13 00:46:46.199617 systemd[1613]: Reached target basic.target - Basic System.
Aug 13 00:46:46.199660 systemd[1613]: Reached target default.target - Main User Target.
Aug 13 00:46:46.199690 systemd[1613]: Startup finished in 135ms.
Aug 13 00:46:46.199853 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 00:46:46.203604 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 00:46:46.452606 kubelet[1606]: E0813 00:46:46.452467 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:46:46.458101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:46:46.458282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:46:46.458617 systemd[1]: kubelet.service: Consumed 849ms CPU time, 266M memory peak.
Aug 13 00:46:46.464737 systemd[1]: Started sshd@1-172.234.29.142:22-139.178.89.65:41770.service - OpenSSH per-connection server daemon (139.178.89.65:41770).
Aug 13 00:46:46.569960 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Aug 13 00:46:46.797009 sshd[1630]: Accepted publickey for core from 139.178.89.65 port 41770 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:46:46.798918 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:46:46.803294 systemd-logind[1461]: New session 2 of user core.
Aug 13 00:46:46.812542 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 00:46:47.051381 sshd[1632]: Connection closed by 139.178.89.65 port 41770
Aug 13 00:46:47.051893 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
Aug 13 00:46:47.055483 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:46:47.056266 systemd[1]: sshd@1-172.234.29.142:22-139.178.89.65:41770.service: Deactivated successfully.
Aug 13 00:46:47.058166 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:46:47.059342 systemd-logind[1461]: Removed session 2.
Aug 13 00:46:47.108518 systemd[1]: Started sshd@2-172.234.29.142:22-139.178.89.65:41784.service - OpenSSH per-connection server daemon (139.178.89.65:41784).
Aug 13 00:46:47.430868 sshd[1638]: Accepted publickey for core from 139.178.89.65 port 41784 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:46:47.432401 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:46:47.436204 systemd-logind[1461]: New session 3 of user core.
Aug 13 00:46:47.445577 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 00:46:47.668946 sshd[1640]: Connection closed by 139.178.89.65 port 41784
Aug 13 00:46:47.669758 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
Aug 13 00:46:47.672681 systemd[1]: sshd@2-172.234.29.142:22-139.178.89.65:41784.service: Deactivated successfully.
Aug 13 00:46:47.674414 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:46:47.675812 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:46:47.676637 systemd-logind[1461]: Removed session 3.
Aug 13 00:46:47.732685 systemd[1]: Started sshd@3-172.234.29.142:22-139.178.89.65:41790.service - OpenSSH per-connection server daemon (139.178.89.65:41790).
Aug 13 00:46:48.059985 sshd[1646]: Accepted publickey for core from 139.178.89.65 port 41790 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:46:48.061575 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:46:48.066426 systemd-logind[1461]: New session 4 of user core.
Aug 13 00:46:48.072575 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 00:46:48.205252 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Aug 13 00:46:48.305743 sshd[1648]: Connection closed by 139.178.89.65 port 41790
Aug 13 00:46:48.306422 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Aug 13 00:46:48.310501 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:46:48.311408 systemd[1]: sshd@3-172.234.29.142:22-139.178.89.65:41790.service: Deactivated successfully.
Aug 13 00:46:48.313414 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:46:48.314655 systemd-logind[1461]: Removed session 4.
Aug 13 00:46:48.372420 systemd[1]: Started sshd@4-172.234.29.142:22-139.178.89.65:35154.service - OpenSSH per-connection server daemon (139.178.89.65:35154).
Aug 13 00:46:48.707744 sshd[1654]: Accepted publickey for core from 139.178.89.65 port 35154 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:46:48.709436 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:46:48.714145 systemd-logind[1461]: New session 5 of user core.
Aug 13 00:46:48.719567 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 00:46:48.918543 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 00:46:48.919015 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:46:48.932088 sudo[1657]: pam_unix(sudo:session): session closed for user root
Aug 13 00:46:48.983831 sshd[1656]: Connection closed by 139.178.89.65 port 35154
Aug 13 00:46:48.984498 sshd-session[1654]: pam_unix(sshd:session): session closed for user core
Aug 13 00:46:48.988147 systemd[1]: sshd@4-172.234.29.142:22-139.178.89.65:35154.service: Deactivated successfully.
Aug 13 00:46:48.989804 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:46:48.990529 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:46:48.991247 systemd-logind[1461]: Removed session 5.
Aug 13 00:46:49.048683 systemd[1]: Started sshd@5-172.234.29.142:22-139.178.89.65:35164.service - OpenSSH per-connection server daemon (139.178.89.65:35164).
Aug 13 00:46:49.368884 sshd[1663]: Accepted publickey for core from 139.178.89.65 port 35164 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:46:49.370597 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:46:49.375286 systemd-logind[1461]: New session 6 of user core.
Aug 13 00:46:49.386573 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 00:46:49.562588 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 00:46:49.562853 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:46:49.565770 sudo[1667]: pam_unix(sudo:session): session closed for user root
Aug 13 00:46:49.570445 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 13 00:46:49.570761 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:46:49.589687 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:46:49.614026 augenrules[1689]: No rules
Aug 13 00:46:49.615551 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:46:49.615800 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:46:49.616649 sudo[1666]: pam_unix(sudo:session): session closed for user root
Aug 13 00:46:49.666647 sshd[1665]: Connection closed by 139.178.89.65 port 35164
Aug 13 00:46:49.667365 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Aug 13 00:46:49.670046 systemd[1]: sshd@5-172.234.29.142:22-139.178.89.65:35164.service: Deactivated successfully.
Aug 13 00:46:49.672208 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:46:49.673895 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:46:49.675066 systemd-logind[1461]: Removed session 6.
Aug 13 00:46:49.733589 systemd[1]: Started sshd@6-172.234.29.142:22-139.178.89.65:35174.service - OpenSSH per-connection server daemon (139.178.89.65:35174).
Aug 13 00:46:50.072017 sshd[1698]: Accepted publickey for core from 139.178.89.65 port 35174 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:46:50.073788 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:46:50.078562 systemd-logind[1461]: New session 7 of user core.
Aug 13 00:46:50.085569 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 00:46:50.273324 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:46:50.273664 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:46:50.529642 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 00:46:50.530659 (dockerd)[1719]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 00:46:50.764866 dockerd[1719]: time="2025-08-13T00:46:50.764799000Z" level=info msg="Starting up"
Aug 13 00:46:50.850205 dockerd[1719]: time="2025-08-13T00:46:50.849721000Z" level=info msg="Loading containers: start."
Aug 13 00:46:50.994476 kernel: Initializing XFRM netlink socket
Aug 13 00:46:51.017561 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Aug 13 00:46:51.025149 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Aug 13 00:46:51.066992 systemd-networkd[1389]: docker0: Link UP
Aug 13 00:46:51.067280 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Aug 13 00:46:51.092708 dockerd[1719]: time="2025-08-13T00:46:51.092676220Z" level=info msg="Loading containers: done."
Aug 13 00:46:51.105410 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck658505718-merged.mount: Deactivated successfully.
Aug 13 00:46:51.106849 dockerd[1719]: time="2025-08-13T00:46:51.106821520Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:46:51.106925 dockerd[1719]: time="2025-08-13T00:46:51.106908030Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Aug 13 00:46:51.107028 dockerd[1719]: time="2025-08-13T00:46:51.107013490Z" level=info msg="Daemon has completed initialization"
Aug 13 00:46:51.132078 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 00:46:51.132611 dockerd[1719]: time="2025-08-13T00:46:51.131992100Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:46:51.676002 containerd[1485]: time="2025-08-13T00:46:51.675954330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 00:46:52.454237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667986613.mount: Deactivated successfully.
Aug 13 00:46:53.646378 containerd[1485]: time="2025-08-13T00:46:53.646305370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:53.647471 containerd[1485]: time="2025-08-13T00:46:53.647427580Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994"
Aug 13 00:46:53.649473 containerd[1485]: time="2025-08-13T00:46:53.647964390Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:53.652975 containerd[1485]: time="2025-08-13T00:46:53.652941540Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.9769511s"
Aug 13 00:46:53.652975 containerd[1485]: time="2025-08-13T00:46:53.652973060Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\""
Aug 13 00:46:53.653368 containerd[1485]: time="2025-08-13T00:46:53.653334270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:53.653931 containerd[1485]: time="2025-08-13T00:46:53.653912700Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 13 00:46:55.356554 containerd[1485]: time="2025-08-13T00:46:55.356484260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:55.357552 containerd[1485]: time="2025-08-13T00:46:55.357378080Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636"
Aug 13 00:46:55.359188 containerd[1485]: time="2025-08-13T00:46:55.358033060Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:55.360354 containerd[1485]: time="2025-08-13T00:46:55.360322540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:55.361563 containerd[1485]: time="2025-08-13T00:46:55.361527580Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.70759059s"
Aug 13 00:46:55.361563 containerd[1485]: time="2025-08-13T00:46:55.361557520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Aug 13 00:46:55.362200 containerd[1485]: time="2025-08-13T00:46:55.362155890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 00:46:56.708758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:46:56.715747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:46:56.760697 containerd[1485]: time="2025-08-13T00:46:56.760655600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:56.761580 containerd[1485]: time="2025-08-13T00:46:56.761534400Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921"
Aug 13 00:46:56.763419 containerd[1485]: time="2025-08-13T00:46:56.762315160Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:56.764638 containerd[1485]: time="2025-08-13T00:46:56.764608080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:56.765952 containerd[1485]: time="2025-08-13T00:46:56.765930540Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.4037475s"
Aug 13 00:46:56.766035 containerd[1485]: time="2025-08-13T00:46:56.766021550Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Aug 13 00:46:56.767211 containerd[1485]: time="2025-08-13T00:46:56.767060470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 00:46:56.887311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:46:56.891950 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:46:56.933240 kubelet[1975]: E0813 00:46:56.933186 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:46:56.938022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:46:56.938217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:46:56.938675 systemd[1]: kubelet.service: Consumed 196ms CPU time, 110.5M memory peak.
Aug 13 00:46:57.999649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777951886.mount: Deactivated successfully.
Aug 13 00:46:58.297826 containerd[1485]: time="2025-08-13T00:46:58.297682180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:58.298817 containerd[1485]: time="2025-08-13T00:46:58.298767660Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380"
Aug 13 00:46:58.299487 containerd[1485]: time="2025-08-13T00:46:58.299438080Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:58.300998 containerd[1485]: time="2025-08-13T00:46:58.300965390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:58.301670 containerd[1485]: time="2025-08-13T00:46:58.301641100Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.53433742s"
Aug 13 00:46:58.301745 containerd[1485]: time="2025-08-13T00:46:58.301730800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Aug 13 00:46:58.302569 containerd[1485]: time="2025-08-13T00:46:58.302538590Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:46:59.078605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3672378816.mount: Deactivated successfully.
Aug 13 00:46:59.805964 containerd[1485]: time="2025-08-13T00:46:59.805001140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:59.805964 containerd[1485]: time="2025-08-13T00:46:59.805906250Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 00:46:59.806880 containerd[1485]: time="2025-08-13T00:46:59.806856490Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:59.814389 containerd[1485]: time="2025-08-13T00:46:59.813118440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:46:59.815080 containerd[1485]: time="2025-08-13T00:46:59.815034010Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.51238512s"
Aug 13 00:46:59.815080 containerd[1485]: time="2025-08-13T00:46:59.815069340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:46:59.815615 containerd[1485]: time="2025-08-13T00:46:59.815591630Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:47:00.490244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622407823.mount: Deactivated successfully.
Aug 13 00:47:00.494370 containerd[1485]: time="2025-08-13T00:47:00.494309540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:00.495051 containerd[1485]: time="2025-08-13T00:47:00.495009070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 00:47:00.496471 containerd[1485]: time="2025-08-13T00:47:00.495294900Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:00.497478 containerd[1485]: time="2025-08-13T00:47:00.497054070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:00.498247 containerd[1485]: time="2025-08-13T00:47:00.497834220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 682.1716ms" Aug 13 00:47:00.498247 containerd[1485]: time="2025-08-13T00:47:00.497862340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:47:00.498580 containerd[1485]: time="2025-08-13T00:47:00.498539130Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:47:01.274715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962738636.mount: Deactivated successfully. Aug 13 00:47:03.017262 containerd[1485]: time="2025-08-13T00:47:03.016073300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:03.017262 containerd[1485]: time="2025-08-13T00:47:03.017006580Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 00:47:03.017262 containerd[1485]: time="2025-08-13T00:47:03.017220150Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:03.023132 containerd[1485]: time="2025-08-13T00:47:03.023109150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:03.025119 containerd[1485]: time="2025-08-13T00:47:03.025085560Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.52651383s" Aug 13 
00:47:03.025166 containerd[1485]: time="2025-08-13T00:47:03.025123090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:47:05.120304 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:05.120466 systemd[1]: kubelet.service: Consumed 196ms CPU time, 110.5M memory peak. Aug 13 00:47:05.127624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:05.163641 systemd[1]: Reload requested from client PID 2130 ('systemctl') (unit session-7.scope)... Aug 13 00:47:05.163797 systemd[1]: Reloading... Aug 13 00:47:05.306489 zram_generator::config[2173]: No configuration found. Aug 13 00:47:05.410222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:47:05.501847 systemd[1]: Reloading finished in 337 ms. Aug 13 00:47:05.553899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:05.558871 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:47:05.561983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:05.563110 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:47:05.563644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:05.563680 systemd[1]: kubelet.service: Consumed 146ms CPU time, 99.4M memory peak. Aug 13 00:47:05.577970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:05.728291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:47:05.732312 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:47:05.769553 kubelet[2232]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:05.769553 kubelet[2232]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:47:05.769553 kubelet[2232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:05.769881 kubelet[2232]: I0813 00:47:05.769603 2232 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:47:06.171777 kubelet[2232]: I0813 00:47:06.171652 2232 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:47:06.172293 kubelet[2232]: I0813 00:47:06.171909 2232 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:47:06.172624 kubelet[2232]: I0813 00:47:06.172597 2232 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:47:06.197864 kubelet[2232]: E0813 00:47:06.197840 2232 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.29.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.29.142:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:06.198738 kubelet[2232]: I0813 
00:47:06.198724 2232 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:47:06.205788 kubelet[2232]: E0813 00:47:06.205761 2232 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:47:06.205788 kubelet[2232]: I0813 00:47:06.205784 2232 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:47:06.209407 kubelet[2232]: I0813 00:47:06.209384 2232 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:47:06.210825 kubelet[2232]: I0813 00:47:06.210785 2232 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:47:06.210965 kubelet[2232]: I0813 00:47:06.210817 2232 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"172-234-29-142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:47:06.210965 kubelet[2232]: I0813 00:47:06.210962 2232 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:47:06.211077 kubelet[2232]: I0813 00:47:06.210971 2232 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:47:06.211100 kubelet[2232]: I0813 00:47:06.211092 2232 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:06.214895 kubelet[2232]: I0813 00:47:06.214812 2232 kubelet.go:446] 
"Attempting to sync node with API server" Aug 13 00:47:06.214895 kubelet[2232]: I0813 00:47:06.214851 2232 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:47:06.214895 kubelet[2232]: I0813 00:47:06.214866 2232 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:47:06.214895 kubelet[2232]: I0813 00:47:06.214875 2232 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:47:06.221098 kubelet[2232]: W0813 00:47:06.221051 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.29.142:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-29-142&limit=500&resourceVersion=0": dial tcp 172.234.29.142:6443: connect: connection refused Aug 13 00:47:06.221138 kubelet[2232]: E0813 00:47:06.221110 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.29.142:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-29-142&limit=500&resourceVersion=0\": dial tcp 172.234.29.142:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:06.221754 kubelet[2232]: I0813 00:47:06.221351 2232 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:47:06.221754 kubelet[2232]: I0813 00:47:06.221653 2232 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:47:06.223822 kubelet[2232]: W0813 00:47:06.222780 2232 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 00:47:06.225476 kubelet[2232]: I0813 00:47:06.224988 2232 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:47:06.225476 kubelet[2232]: I0813 00:47:06.225017 2232 server.go:1287] "Started kubelet" Aug 13 00:47:06.226503 kubelet[2232]: W0813 00:47:06.226446 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.29.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.29.142:6443: connect: connection refused Aug 13 00:47:06.226562 kubelet[2232]: E0813 00:47:06.226538 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.29.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.29.142:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:06.228473 kubelet[2232]: I0813 00:47:06.227850 2232 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:47:06.228473 kubelet[2232]: I0813 00:47:06.228188 2232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:47:06.228831 kubelet[2232]: I0813 00:47:06.228816 2232 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:47:06.230561 kubelet[2232]: I0813 00:47:06.230525 2232 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:47:06.230638 kubelet[2232]: I0813 00:47:06.230616 2232 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:47:06.230836 kubelet[2232]: I0813 00:47:06.230822 2232 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:47:06.232249 kubelet[2232]: I0813 00:47:06.232229 2232 volume_manager.go:297] "Starting 
Kubelet Volume Manager" Aug 13 00:47:06.232399 kubelet[2232]: E0813 00:47:06.232370 2232 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-29-142\" not found" Aug 13 00:47:06.237021 kubelet[2232]: E0813 00:47:06.235297 2232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.29.142:6443/api/v1/namespaces/default/events\": dial tcp 172.234.29.142:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-29-142.185b2d0fed9a20f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-29-142,UID:172-234-29-142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-29-142,},FirstTimestamp:2025-08-13 00:47:06.22500069 +0000 UTC m=+0.487790191,LastTimestamp:2025-08-13 00:47:06.22500069 +0000 UTC m=+0.487790191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-29-142,}" Aug 13 00:47:06.237117 kubelet[2232]: E0813 00:47:06.237058 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.29.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-29-142?timeout=10s\": dial tcp 172.234.29.142:6443: connect: connection refused" interval="200ms" Aug 13 00:47:06.237289 kubelet[2232]: I0813 00:47:06.237266 2232 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:47:06.237879 kubelet[2232]: I0813 00:47:06.237344 2232 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:47:06.238814 kubelet[2232]: I0813 00:47:06.238557 2232 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 
00:47:06.238814 kubelet[2232]: I0813 00:47:06.238604 2232 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:47:06.240690 kubelet[2232]: I0813 00:47:06.240677 2232 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:47:06.251928 kubelet[2232]: I0813 00:47:06.251902 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:47:06.253076 kubelet[2232]: I0813 00:47:06.253063 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:47:06.253142 kubelet[2232]: I0813 00:47:06.253132 2232 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:47:06.253201 kubelet[2232]: I0813 00:47:06.253190 2232 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:47:06.253244 kubelet[2232]: I0813 00:47:06.253236 2232 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:47:06.253505 kubelet[2232]: E0813 00:47:06.253308 2232 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:47:06.260211 kubelet[2232]: W0813 00:47:06.260178 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.29.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.29.142:6443: connect: connection refused Aug 13 00:47:06.260277 kubelet[2232]: E0813 00:47:06.260215 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.29.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.29.142:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:06.260303 kubelet[2232]: W0813 00:47:06.260280 2232 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.29.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.29.142:6443: connect: connection refused Aug 13 00:47:06.260335 kubelet[2232]: E0813 00:47:06.260302 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.29.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.29.142:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:06.264218 kubelet[2232]: E0813 00:47:06.264180 2232 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:47:06.268627 kubelet[2232]: I0813 00:47:06.268616 2232 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:47:06.268856 kubelet[2232]: I0813 00:47:06.268672 2232 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:47:06.268856 kubelet[2232]: I0813 00:47:06.268688 2232 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:06.270199 kubelet[2232]: I0813 00:47:06.270008 2232 policy_none.go:49] "None policy: Start" Aug 13 00:47:06.270199 kubelet[2232]: I0813 00:47:06.270023 2232 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:47:06.270199 kubelet[2232]: I0813 00:47:06.270034 2232 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:47:06.275482 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:47:06.288889 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:47:06.291811 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 00:47:06.302649 kubelet[2232]: I0813 00:47:06.302589 2232 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:47:06.303014 kubelet[2232]: I0813 00:47:06.302988 2232 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:47:06.304098 kubelet[2232]: I0813 00:47:06.303014 2232 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:47:06.304098 kubelet[2232]: I0813 00:47:06.303519 2232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:47:06.304098 kubelet[2232]: E0813 00:47:06.303931 2232 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:47:06.304098 kubelet[2232]: E0813 00:47:06.303956 2232 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-29-142\" not found" Aug 13 00:47:06.361557 systemd[1]: Created slice kubepods-burstable-podd6cff6541a928c9e2060ccd00cece64c.slice - libcontainer container kubepods-burstable-podd6cff6541a928c9e2060ccd00cece64c.slice. Aug 13 00:47:06.377236 kubelet[2232]: E0813 00:47:06.377216 2232 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:06.380021 systemd[1]: Created slice kubepods-burstable-pod0accc450fe4c4db45f962fef3237e76f.slice - libcontainer container kubepods-burstable-pod0accc450fe4c4db45f962fef3237e76f.slice. 
Aug 13 00:47:06.388677 kubelet[2232]: E0813 00:47:06.388660 2232 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:06.392207 systemd[1]: Created slice kubepods-burstable-podef65c24b2fa2737e9b9d86d8d5dfea9a.slice - libcontainer container kubepods-burstable-podef65c24b2fa2737e9b9d86d8d5dfea9a.slice. Aug 13 00:47:06.393644 kubelet[2232]: E0813 00:47:06.393621 2232 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:06.405329 kubelet[2232]: I0813 00:47:06.405303 2232 kubelet_node_status.go:75] "Attempting to register node" node="172-234-29-142" Aug 13 00:47:06.405676 kubelet[2232]: E0813 00:47:06.405640 2232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.29.142:6443/api/v1/nodes\": dial tcp 172.234.29.142:6443: connect: connection refused" node="172-234-29-142" Aug 13 00:47:06.438358 kubelet[2232]: E0813 00:47:06.438268 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.29.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-29-142?timeout=10s\": dial tcp 172.234.29.142:6443: connect: connection refused" interval="400ms" Aug 13 00:47:06.540001 kubelet[2232]: I0813 00:47:06.539935 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:06.540001 kubelet[2232]: I0813 00:47:06.539967 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6cff6541a928c9e2060ccd00cece64c-k8s-certs\") pod \"kube-apiserver-172-234-29-142\" (UID: \"d6cff6541a928c9e2060ccd00cece64c\") " pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:06.540001 kubelet[2232]: I0813 00:47:06.539984 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-ca-certs\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:06.540001 kubelet[2232]: I0813 00:47:06.539998 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-flexvolume-dir\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:06.540001 kubelet[2232]: I0813 00:47:06.540014 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-k8s-certs\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:06.540321 kubelet[2232]: I0813 00:47:06.540030 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-kubeconfig\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:06.540321 kubelet[2232]: I0813 00:47:06.540046 2232 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef65c24b2fa2737e9b9d86d8d5dfea9a-kubeconfig\") pod \"kube-scheduler-172-234-29-142\" (UID: \"ef65c24b2fa2737e9b9d86d8d5dfea9a\") " pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:06.540321 kubelet[2232]: I0813 00:47:06.540060 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6cff6541a928c9e2060ccd00cece64c-ca-certs\") pod \"kube-apiserver-172-234-29-142\" (UID: \"d6cff6541a928c9e2060ccd00cece64c\") " pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:06.540321 kubelet[2232]: I0813 00:47:06.540079 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6cff6541a928c9e2060ccd00cece64c-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-29-142\" (UID: \"d6cff6541a928c9e2060ccd00cece64c\") " pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:06.608042 kubelet[2232]: I0813 00:47:06.607742 2232 kubelet_node_status.go:75] "Attempting to register node" node="172-234-29-142" Aug 13 00:47:06.608042 kubelet[2232]: E0813 00:47:06.608007 2232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.29.142:6443/api/v1/nodes\": dial tcp 172.234.29.142:6443: connect: connection refused" node="172-234-29-142" Aug 13 00:47:06.678175 kubelet[2232]: E0813 00:47:06.678112 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:06.678967 containerd[1485]: time="2025-08-13T00:47:06.678878070Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-234-29-142,Uid:d6cff6541a928c9e2060ccd00cece64c,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:06.689641 kubelet[2232]: E0813 00:47:06.689552 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:06.690188 containerd[1485]: time="2025-08-13T00:47:06.689903880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-29-142,Uid:0accc450fe4c4db45f962fef3237e76f,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:06.694584 kubelet[2232]: E0813 00:47:06.694553 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:06.694995 containerd[1485]: time="2025-08-13T00:47:06.694953330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-29-142,Uid:ef65c24b2fa2737e9b9d86d8d5dfea9a,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:06.839192 kubelet[2232]: E0813 00:47:06.839137 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.29.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-29-142?timeout=10s\": dial tcp 172.234.29.142:6443: connect: connection refused" interval="800ms" Aug 13 00:47:07.010090 kubelet[2232]: I0813 00:47:07.009985 2232 kubelet_node_status.go:75] "Attempting to register node" node="172-234-29-142" Aug 13 00:47:07.010505 kubelet[2232]: E0813 00:47:07.010480 2232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.29.142:6443/api/v1/nodes\": dial tcp 172.234.29.142:6443: connect: connection refused" node="172-234-29-142" Aug 13 00:47:07.187718 kubelet[2232]: W0813 00:47:07.187674 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://172.234.29.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.29.142:6443: connect: connection refused Aug 13 00:47:07.187777 kubelet[2232]: E0813 00:47:07.187724 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.29.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.29.142:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:07.411550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923715330.mount: Deactivated successfully. Aug 13 00:47:07.416062 containerd[1485]: time="2025-08-13T00:47:07.416017310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:47:07.417183 kubelet[2232]: W0813 00:47:07.417080 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.29.142:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-29-142&limit=500&resourceVersion=0": dial tcp 172.234.29.142:6443: connect: connection refused Aug 13 00:47:07.417183 kubelet[2232]: E0813 00:47:07.417148 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.29.142:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-29-142&limit=500&resourceVersion=0\": dial tcp 172.234.29.142:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:07.417316 containerd[1485]: time="2025-08-13T00:47:07.417112610Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:47:07.418513 
containerd[1485]: time="2025-08-13T00:47:07.418448720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 00:47:07.418852 containerd[1485]: time="2025-08-13T00:47:07.418812830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:47:07.420056 containerd[1485]: time="2025-08-13T00:47:07.419975450Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:47:07.420578 containerd[1485]: time="2025-08-13T00:47:07.420532680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:47:07.423472 containerd[1485]: time="2025-08-13T00:47:07.423422830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:47:07.425216 containerd[1485]: time="2025-08-13T00:47:07.424722100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 745.72556ms" Aug 13 00:47:07.426791 containerd[1485]: time="2025-08-13T00:47:07.425636700Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 730.57262ms" Aug 13 00:47:07.426791 containerd[1485]: time="2025-08-13T00:47:07.425818780Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:47:07.427884 containerd[1485]: time="2025-08-13T00:47:07.427858380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 737.90547ms" Aug 13 00:47:07.519486 containerd[1485]: time="2025-08-13T00:47:07.518965500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:07.519597 containerd[1485]: time="2025-08-13T00:47:07.519536430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:07.519663 containerd[1485]: time="2025-08-13T00:47:07.519604980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:07.520678 containerd[1485]: time="2025-08-13T00:47:07.520590290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:07.523770 containerd[1485]: time="2025-08-13T00:47:07.523591630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:07.523770 containerd[1485]: time="2025-08-13T00:47:07.523628240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:07.523770 containerd[1485]: time="2025-08-13T00:47:07.523641040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:07.523770 containerd[1485]: time="2025-08-13T00:47:07.523700110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:07.526128 containerd[1485]: time="2025-08-13T00:47:07.526017710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:07.526302 containerd[1485]: time="2025-08-13T00:47:07.526262420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:07.526429 containerd[1485]: time="2025-08-13T00:47:07.526395420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:07.527168 containerd[1485]: time="2025-08-13T00:47:07.527087270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:07.554639 systemd[1]: Started cri-containerd-14338c1ab2b31997538f178d576f0169725bc780fe498dc83a538ec5652a3ef5.scope - libcontainer container 14338c1ab2b31997538f178d576f0169725bc780fe498dc83a538ec5652a3ef5. Aug 13 00:47:07.556319 systemd[1]: Started cri-containerd-61e842aa94246e528c7ab2efba980956509cdfe6bdd4997dafd2ad6a5febe4c7.scope - libcontainer container 61e842aa94246e528c7ab2efba980956509cdfe6bdd4997dafd2ad6a5febe4c7. Aug 13 00:47:07.563483 systemd[1]: Started cri-containerd-efd54297e6cb88825d69af9394a4146c9833abf4011e3c6babfffa6e6d06a9a7.scope - libcontainer container efd54297e6cb88825d69af9394a4146c9833abf4011e3c6babfffa6e6d06a9a7. 
Aug 13 00:47:07.610913 containerd[1485]: time="2025-08-13T00:47:07.610821820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-29-142,Uid:d6cff6541a928c9e2060ccd00cece64c,Namespace:kube-system,Attempt:0,} returns sandbox id \"14338c1ab2b31997538f178d576f0169725bc780fe498dc83a538ec5652a3ef5\"" Aug 13 00:47:07.616639 kubelet[2232]: E0813 00:47:07.616505 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:07.620259 containerd[1485]: time="2025-08-13T00:47:07.620178370Z" level=info msg="CreateContainer within sandbox \"14338c1ab2b31997538f178d576f0169725bc780fe498dc83a538ec5652a3ef5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:47:07.624861 containerd[1485]: time="2025-08-13T00:47:07.624814400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-29-142,Uid:ef65c24b2fa2737e9b9d86d8d5dfea9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"61e842aa94246e528c7ab2efba980956509cdfe6bdd4997dafd2ad6a5febe4c7\"" Aug 13 00:47:07.626486 kubelet[2232]: E0813 00:47:07.626331 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:07.628872 containerd[1485]: time="2025-08-13T00:47:07.628706290Z" level=info msg="CreateContainer within sandbox \"61e842aa94246e528c7ab2efba980956509cdfe6bdd4997dafd2ad6a5febe4c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:47:07.637258 containerd[1485]: time="2025-08-13T00:47:07.637230890Z" level=info msg="CreateContainer within sandbox \"14338c1ab2b31997538f178d576f0169725bc780fe498dc83a538ec5652a3ef5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"180c151477a8917afcb586e72141b84414110e2b10c668b15756c5cf661e2389\"" Aug 13 00:47:07.640483 containerd[1485]: time="2025-08-13T00:47:07.638572020Z" level=info msg="StartContainer for \"180c151477a8917afcb586e72141b84414110e2b10c668b15756c5cf661e2389\"" Aug 13 00:47:07.640543 kubelet[2232]: E0813 00:47:07.640188 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.29.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-29-142?timeout=10s\": dial tcp 172.234.29.142:6443: connect: connection refused" interval="1.6s" Aug 13 00:47:07.640592 containerd[1485]: time="2025-08-13T00:47:07.640566690Z" level=info msg="CreateContainer within sandbox \"61e842aa94246e528c7ab2efba980956509cdfe6bdd4997dafd2ad6a5febe4c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f818fc30f4b328655713341dfddda8094d19f06b7b5bbd774cb86dea92149c59\"" Aug 13 00:47:07.640853 containerd[1485]: time="2025-08-13T00:47:07.640829190Z" level=info msg="StartContainer for \"f818fc30f4b328655713341dfddda8094d19f06b7b5bbd774cb86dea92149c59\"" Aug 13 00:47:07.660195 kubelet[2232]: W0813 00:47:07.660127 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.29.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.29.142:6443: connect: connection refused Aug 13 00:47:07.660314 kubelet[2232]: E0813 00:47:07.660199 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.29.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.29.142:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:07.668923 containerd[1485]: time="2025-08-13T00:47:07.668822000Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-234-29-142,Uid:0accc450fe4c4db45f962fef3237e76f,Namespace:kube-system,Attempt:0,} returns sandbox id \"efd54297e6cb88825d69af9394a4146c9833abf4011e3c6babfffa6e6d06a9a7\"" Aug 13 00:47:07.670581 kubelet[2232]: E0813 00:47:07.670558 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:07.672293 containerd[1485]: time="2025-08-13T00:47:07.672273260Z" level=info msg="CreateContainer within sandbox \"efd54297e6cb88825d69af9394a4146c9833abf4011e3c6babfffa6e6d06a9a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:47:07.681044 containerd[1485]: time="2025-08-13T00:47:07.681003590Z" level=info msg="CreateContainer within sandbox \"efd54297e6cb88825d69af9394a4146c9833abf4011e3c6babfffa6e6d06a9a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"963670f5273dd2dcc79a7b78d1c494b1360ac94ee388a773f0fb92095c12f64d\"" Aug 13 00:47:07.681761 containerd[1485]: time="2025-08-13T00:47:07.681742860Z" level=info msg="StartContainer for \"963670f5273dd2dcc79a7b78d1c494b1360ac94ee388a773f0fb92095c12f64d\"" Aug 13 00:47:07.690008 kubelet[2232]: W0813 00:47:07.689945 2232 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.29.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.29.142:6443: connect: connection refused Aug 13 00:47:07.690800 kubelet[2232]: E0813 00:47:07.690777 2232 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.29.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.29.142:6443: connect: connection refused" 
logger="UnhandledError" Aug 13 00:47:07.693963 systemd[1]: Started cri-containerd-180c151477a8917afcb586e72141b84414110e2b10c668b15756c5cf661e2389.scope - libcontainer container 180c151477a8917afcb586e72141b84414110e2b10c668b15756c5cf661e2389. Aug 13 00:47:07.703632 systemd[1]: Started cri-containerd-f818fc30f4b328655713341dfddda8094d19f06b7b5bbd774cb86dea92149c59.scope - libcontainer container f818fc30f4b328655713341dfddda8094d19f06b7b5bbd774cb86dea92149c59. Aug 13 00:47:07.725752 systemd[1]: Started cri-containerd-963670f5273dd2dcc79a7b78d1c494b1360ac94ee388a773f0fb92095c12f64d.scope - libcontainer container 963670f5273dd2dcc79a7b78d1c494b1360ac94ee388a773f0fb92095c12f64d. Aug 13 00:47:07.754288 containerd[1485]: time="2025-08-13T00:47:07.753875650Z" level=info msg="StartContainer for \"180c151477a8917afcb586e72141b84414110e2b10c668b15756c5cf661e2389\" returns successfully" Aug 13 00:47:07.805762 containerd[1485]: time="2025-08-13T00:47:07.805281200Z" level=info msg="StartContainer for \"f818fc30f4b328655713341dfddda8094d19f06b7b5bbd774cb86dea92149c59\" returns successfully" Aug 13 00:47:07.807403 containerd[1485]: time="2025-08-13T00:47:07.807362830Z" level=info msg="StartContainer for \"963670f5273dd2dcc79a7b78d1c494b1360ac94ee388a773f0fb92095c12f64d\" returns successfully" Aug 13 00:47:07.815179 kubelet[2232]: I0813 00:47:07.815155 2232 kubelet_node_status.go:75] "Attempting to register node" node="172-234-29-142" Aug 13 00:47:07.815787 kubelet[2232]: E0813 00:47:07.815754 2232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.29.142:6443/api/v1/nodes\": dial tcp 172.234.29.142:6443: connect: connection refused" node="172-234-29-142" Aug 13 00:47:08.275011 kubelet[2232]: E0813 00:47:08.274961 2232 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:08.275424 kubelet[2232]: E0813 00:47:08.275084 
2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:08.275841 kubelet[2232]: E0813 00:47:08.275818 2232 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:08.275923 kubelet[2232]: E0813 00:47:08.275900 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:08.279786 kubelet[2232]: E0813 00:47:08.279763 2232 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:08.279866 kubelet[2232]: E0813 00:47:08.279843 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:09.243384 kubelet[2232]: E0813 00:47:09.243316 2232 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:09.280309 kubelet[2232]: E0813 00:47:09.280164 2232 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:09.280309 kubelet[2232]: E0813 00:47:09.280268 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:09.280804 kubelet[2232]: E0813 00:47:09.280634 2232 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"172-234-29-142\" not found" node="172-234-29-142" Aug 13 00:47:09.280804 kubelet[2232]: E0813 00:47:09.280702 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:09.292136 kubelet[2232]: E0813 00:47:09.292051 2232 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{172-234-29-142.185b2d0fed9a20f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-29-142,UID:172-234-29-142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-29-142,},FirstTimestamp:2025-08-13 00:47:06.22500069 +0000 UTC m=+0.487790191,LastTimestamp:2025-08-13 00:47:06.22500069 +0000 UTC m=+0.487790191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-29-142,}" Aug 13 00:47:09.418136 kubelet[2232]: I0813 00:47:09.417955 2232 kubelet_node_status.go:75] "Attempting to register node" node="172-234-29-142" Aug 13 00:47:09.423962 kubelet[2232]: I0813 00:47:09.423520 2232 kubelet_node_status.go:78] "Successfully registered node" node="172-234-29-142" Aug 13 00:47:09.433118 kubelet[2232]: I0813 00:47:09.433091 2232 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:09.437590 kubelet[2232]: E0813 00:47:09.437130 2232 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-29-142\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:09.437590 kubelet[2232]: I0813 00:47:09.437149 2232 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:09.438230 kubelet[2232]: E0813 00:47:09.438180 2232 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-29-142\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:09.438307 kubelet[2232]: I0813 00:47:09.438281 2232 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:09.439409 kubelet[2232]: E0813 00:47:09.439395 2232 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-29-142\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:10.228978 kubelet[2232]: I0813 00:47:10.228609 2232 apiserver.go:52] "Watching apiserver" Aug 13 00:47:10.238918 kubelet[2232]: I0813 00:47:10.238886 2232 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:47:10.281639 kubelet[2232]: I0813 00:47:10.281600 2232 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:10.288945 kubelet[2232]: E0813 00:47:10.288911 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:11.189570 systemd[1]: Reload requested from client PID 2500 ('systemctl') (unit session-7.scope)... Aug 13 00:47:11.189993 systemd[1]: Reloading... Aug 13 00:47:11.275509 zram_generator::config[2545]: No configuration found. 
Aug 13 00:47:11.283244 kubelet[2232]: E0813 00:47:11.283210 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:11.408890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:47:11.511448 systemd[1]: Reloading finished in 320 ms. Aug 13 00:47:11.535584 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:11.561053 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:47:11.561532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:11.561574 systemd[1]: kubelet.service: Consumed 898ms CPU time, 133.3M memory peak. Aug 13 00:47:11.567692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:11.730841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:11.734730 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:47:11.786604 kubelet[2596]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:11.786604 kubelet[2596]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:47:11.786604 kubelet[2596]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:11.786604 kubelet[2596]: I0813 00:47:11.785822 2596 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:47:11.801014 kubelet[2596]: I0813 00:47:11.800982 2596 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:47:11.801014 kubelet[2596]: I0813 00:47:11.801006 2596 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:47:11.801213 kubelet[2596]: I0813 00:47:11.801196 2596 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:47:11.803574 kubelet[2596]: I0813 00:47:11.803536 2596 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:47:11.807097 kubelet[2596]: I0813 00:47:11.806133 2596 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:47:11.813657 kubelet[2596]: E0813 00:47:11.813631 2596 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:47:11.813657 kubelet[2596]: I0813 00:47:11.813656 2596 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:47:11.819580 kubelet[2596]: I0813 00:47:11.819555 2596 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:47:11.819853 kubelet[2596]: I0813 00:47:11.819821 2596 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:47:11.819980 kubelet[2596]: I0813 00:47:11.819849 2596 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-29-142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:47:11.820059 kubelet[2596]: I0813 00:47:11.819986 2596 topology_manager.go:138] "Creating topology manager with none 
policy" Aug 13 00:47:11.820059 kubelet[2596]: I0813 00:47:11.819995 2596 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:47:11.820059 kubelet[2596]: I0813 00:47:11.820039 2596 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:11.820200 kubelet[2596]: I0813 00:47:11.820180 2596 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:47:11.820222 kubelet[2596]: I0813 00:47:11.820206 2596 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:47:11.821526 kubelet[2596]: I0813 00:47:11.821435 2596 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:47:11.829200 kubelet[2596]: I0813 00:47:11.828675 2596 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:47:11.830603 kubelet[2596]: I0813 00:47:11.830581 2596 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:47:11.830873 kubelet[2596]: I0813 00:47:11.830854 2596 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:47:11.831363 kubelet[2596]: I0813 00:47:11.831343 2596 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:47:11.831397 kubelet[2596]: I0813 00:47:11.831372 2596 server.go:1287] "Started kubelet" Aug 13 00:47:11.833722 kubelet[2596]: I0813 00:47:11.833070 2596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:47:11.841475 kubelet[2596]: I0813 00:47:11.839132 2596 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:47:11.842190 kubelet[2596]: I0813 00:47:11.842175 2596 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:47:11.842860 kubelet[2596]: I0813 00:47:11.839876 2596 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:47:11.842913 kubelet[2596]: E0813 00:47:11.839968 2596 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"172-234-29-142\" not found" Aug 13 00:47:11.843003 kubelet[2596]: I0813 00:47:11.842972 2596 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:47:11.844698 kubelet[2596]: I0813 00:47:11.843384 2596 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:47:11.845168 kubelet[2596]: I0813 00:47:11.844885 2596 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:47:11.845168 kubelet[2596]: I0813 00:47:11.839868 2596 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:47:11.845239 kubelet[2596]: I0813 00:47:11.845200 2596 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:47:11.845341 kubelet[2596]: I0813 00:47:11.845329 2596 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:47:11.847318 kubelet[2596]: I0813 00:47:11.846490 2596 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:47:11.847318 kubelet[2596]: I0813 00:47:11.846520 2596 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:47:11.847318 kubelet[2596]: I0813 00:47:11.846532 2596 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:47:11.847318 kubelet[2596]: I0813 00:47:11.846539 2596 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:47:11.847318 kubelet[2596]: E0813 00:47:11.846578 2596 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:47:11.854499 kubelet[2596]: I0813 00:47:11.854019 2596 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:47:11.855815 kubelet[2596]: I0813 00:47:11.854615 2596 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:47:11.855815 kubelet[2596]: E0813 00:47:11.854809 2596 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:47:11.857108 kubelet[2596]: I0813 00:47:11.857085 2596 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:47:11.898186 kubelet[2596]: I0813 00:47:11.898165 2596 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:47:11.898186 kubelet[2596]: I0813 00:47:11.898179 2596 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:47:11.898186 kubelet[2596]: I0813 00:47:11.898194 2596 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:11.898671 kubelet[2596]: I0813 00:47:11.898658 2596 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:47:11.898719 kubelet[2596]: I0813 00:47:11.898672 2596 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:47:11.898719 kubelet[2596]: I0813 00:47:11.898688 2596 policy_none.go:49] "None policy: Start" Aug 13 00:47:11.898719 kubelet[2596]: I0813 00:47:11.898696 2596 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:47:11.898719 kubelet[2596]: I0813 00:47:11.898705 
2596 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:47:11.898808 kubelet[2596]: I0813 00:47:11.898785 2596 state_mem.go:75] "Updated machine memory state" Aug 13 00:47:11.903038 kubelet[2596]: I0813 00:47:11.902900 2596 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:47:11.903038 kubelet[2596]: I0813 00:47:11.903035 2596 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:47:11.903112 kubelet[2596]: I0813 00:47:11.903044 2596 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:47:11.903512 kubelet[2596]: I0813 00:47:11.903473 2596 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:47:11.906136 kubelet[2596]: E0813 00:47:11.905838 2596 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:47:11.946930 kubelet[2596]: I0813 00:47:11.946913 2596 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:11.947039 kubelet[2596]: I0813 00:47:11.947017 2596 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:11.947173 kubelet[2596]: I0813 00:47:11.946928 2596 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:11.953189 kubelet[2596]: E0813 00:47:11.953163 2596 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-29-142\" already exists" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:12.005908 kubelet[2596]: I0813 00:47:12.005892 2596 kubelet_node_status.go:75] "Attempting to register node" node="172-234-29-142" Aug 13 00:47:12.012564 kubelet[2596]: I0813 00:47:12.012526 2596 kubelet_node_status.go:124] "Node was 
previously registered" node="172-234-29-142" Aug 13 00:47:12.012603 kubelet[2596]: I0813 00:47:12.012596 2596 kubelet_node_status.go:78] "Successfully registered node" node="172-234-29-142" Aug 13 00:47:12.046570 kubelet[2596]: I0813 00:47:12.046084 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-kubeconfig\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:12.046570 kubelet[2596]: I0813 00:47:12.046265 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:12.046570 kubelet[2596]: I0813 00:47:12.046280 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef65c24b2fa2737e9b9d86d8d5dfea9a-kubeconfig\") pod \"kube-scheduler-172-234-29-142\" (UID: \"ef65c24b2fa2737e9b9d86d8d5dfea9a\") " pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:12.046570 kubelet[2596]: I0813 00:47:12.046293 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6cff6541a928c9e2060ccd00cece64c-ca-certs\") pod \"kube-apiserver-172-234-29-142\" (UID: \"d6cff6541a928c9e2060ccd00cece64c\") " pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:12.046570 kubelet[2596]: I0813 00:47:12.046306 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/d6cff6541a928c9e2060ccd00cece64c-k8s-certs\") pod \"kube-apiserver-172-234-29-142\" (UID: \"d6cff6541a928c9e2060ccd00cece64c\") " pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:12.046698 kubelet[2596]: I0813 00:47:12.046320 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6cff6541a928c9e2060ccd00cece64c-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-29-142\" (UID: \"d6cff6541a928c9e2060ccd00cece64c\") " pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:12.046698 kubelet[2596]: I0813 00:47:12.046348 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-ca-certs\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:12.046698 kubelet[2596]: I0813 00:47:12.046365 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-flexvolume-dir\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:12.046698 kubelet[2596]: I0813 00:47:12.046379 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0accc450fe4c4db45f962fef3237e76f-k8s-certs\") pod \"kube-controller-manager-172-234-29-142\" (UID: \"0accc450fe4c4db45f962fef3237e76f\") " pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:12.185483 sudo[2632]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C 
/opt/bin Aug 13 00:47:12.185818 sudo[2632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:47:12.253906 kubelet[2596]: E0813 00:47:12.253869 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:12.254515 kubelet[2596]: E0813 00:47:12.254494 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:12.254600 kubelet[2596]: E0813 00:47:12.254580 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:12.673212 sudo[2632]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:12.830094 kubelet[2596]: I0813 00:47:12.829883 2596 apiserver.go:52] "Watching apiserver" Aug 13 00:47:12.844067 kubelet[2596]: I0813 00:47:12.844051 2596 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:47:12.878634 kubelet[2596]: E0813 00:47:12.878612 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:12.879326 kubelet[2596]: E0813 00:47:12.879305 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:12.879527 kubelet[2596]: E0813 00:47:12.879509 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 
00:47:12.904465 kubelet[2596]: I0813 00:47:12.903125 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-29-142" podStartSLOduration=2.90301466 podStartE2EDuration="2.90301466s" podCreationTimestamp="2025-08-13 00:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:12.90268113 +0000 UTC m=+1.164482801" watchObservedRunningTime="2025-08-13 00:47:12.90301466 +0000 UTC m=+1.164816331" Aug 13 00:47:12.916627 kubelet[2596]: I0813 00:47:12.916597 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-29-142" podStartSLOduration=1.91658813 podStartE2EDuration="1.91658813s" podCreationTimestamp="2025-08-13 00:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:12.90981194 +0000 UTC m=+1.171613611" watchObservedRunningTime="2025-08-13 00:47:12.91658813 +0000 UTC m=+1.178389811" Aug 13 00:47:12.916739 kubelet[2596]: I0813 00:47:12.916708 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-29-142" podStartSLOduration=1.91670403 podStartE2EDuration="1.91670403s" podCreationTimestamp="2025-08-13 00:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:12.91549621 +0000 UTC m=+1.177297881" watchObservedRunningTime="2025-08-13 00:47:12.91670403 +0000 UTC m=+1.178505711" Aug 13 00:47:13.881382 kubelet[2596]: E0813 00:47:13.880697 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:13.881382 kubelet[2596]: E0813 00:47:13.881331 2596 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:13.882145 kubelet[2596]: E0813 00:47:13.882128 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:13.975211 sudo[1701]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:13.984425 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 00:47:14.027708 sshd[1700]: Connection closed by 139.178.89.65 port 35174 Aug 13 00:47:14.028373 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:14.031998 systemd[1]: sshd@6-172.234.29.142:22-139.178.89.65:35174.service: Deactivated successfully. Aug 13 00:47:14.035046 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:47:14.035433 systemd[1]: session-7.scope: Consumed 3.932s CPU time, 262.2M memory peak. Aug 13 00:47:14.037721 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:47:14.039394 systemd-logind[1461]: Removed session 7. Aug 13 00:47:16.673110 kubelet[2596]: I0813 00:47:16.672997 2596 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:47:16.673711 kubelet[2596]: I0813 00:47:16.673590 2596 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:47:16.673738 containerd[1485]: time="2025-08-13T00:47:16.673449530Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:47:17.595256 kubelet[2596]: E0813 00:47:17.595202 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:17.638500 systemd[1]: Created slice kubepods-besteffort-podb7cd82af_fc9c_48e2_9f40_74ff74a3f593.slice - libcontainer container kubepods-besteffort-podb7cd82af_fc9c_48e2_9f40_74ff74a3f593.slice. Aug 13 00:47:17.656748 systemd[1]: Created slice kubepods-burstable-pod5ed7fd22_d5dc_4877_8b35_3a97e246932f.slice - libcontainer container kubepods-burstable-pod5ed7fd22_d5dc_4877_8b35_3a97e246932f.slice. Aug 13 00:47:17.683236 kubelet[2596]: I0813 00:47:17.683210 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7cd82af-fc9c-48e2-9f40-74ff74a3f593-kube-proxy\") pod \"kube-proxy-g2zwh\" (UID: \"b7cd82af-fc9c-48e2-9f40-74ff74a3f593\") " pod="kube-system/kube-proxy-g2zwh" Aug 13 00:47:17.684509 kubelet[2596]: I0813 00:47:17.683682 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-host-proc-sys-net\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.684509 kubelet[2596]: I0813 00:47:17.683706 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n22v\" (UniqueName: \"kubernetes.io/projected/b7cd82af-fc9c-48e2-9f40-74ff74a3f593-kube-api-access-8n22v\") pod \"kube-proxy-g2zwh\" (UID: \"b7cd82af-fc9c-48e2-9f40-74ff74a3f593\") " pod="kube-system/kube-proxy-g2zwh" Aug 13 00:47:17.684509 kubelet[2596]: I0813 00:47:17.683761 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-xtables-lock\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.684509 kubelet[2596]: I0813 00:47:17.683777 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5gb\" (UniqueName: \"kubernetes.io/projected/5ed7fd22-d5dc-4877-8b35-3a97e246932f-kube-api-access-qp5gb\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.684637 kubelet[2596]: I0813 00:47:17.683792 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7cd82af-fc9c-48e2-9f40-74ff74a3f593-lib-modules\") pod \"kube-proxy-g2zwh\" (UID: \"b7cd82af-fc9c-48e2-9f40-74ff74a3f593\") " pod="kube-system/kube-proxy-g2zwh" Aug 13 00:47:17.684700 kubelet[2596]: I0813 00:47:17.684687 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ed7fd22-d5dc-4877-8b35-3a97e246932f-clustermesh-secrets\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.684779 kubelet[2596]: I0813 00:47:17.684754 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-host-proc-sys-kernel\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.686633 kubelet[2596]: I0813 00:47:17.684868 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-etc-cni-netd\") 
pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.686872 kubelet[2596]: I0813 00:47:17.686849 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ed7fd22-d5dc-4877-8b35-3a97e246932f-hubble-tls\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.687178 kubelet[2596]: I0813 00:47:17.686949 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7cd82af-fc9c-48e2-9f40-74ff74a3f593-xtables-lock\") pod \"kube-proxy-g2zwh\" (UID: \"b7cd82af-fc9c-48e2-9f40-74ff74a3f593\") " pod="kube-system/kube-proxy-g2zwh" Aug 13 00:47:17.687178 kubelet[2596]: I0813 00:47:17.686986 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-hostproc\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.687178 kubelet[2596]: I0813 00:47:17.687012 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-cgroup\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.687178 kubelet[2596]: I0813 00:47:17.687036 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-lib-modules\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.687178 kubelet[2596]: I0813 
00:47:17.687060 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-config-path\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.687178 kubelet[2596]: I0813 00:47:17.687083 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-run\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.687355 kubelet[2596]: I0813 00:47:17.687095 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-bpf-maps\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.687355 kubelet[2596]: I0813 00:47:17.687108 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cni-path\") pod \"cilium-8pgbq\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " pod="kube-system/cilium-8pgbq" Aug 13 00:47:17.769141 systemd[1]: Created slice kubepods-besteffort-poddeb77b33_8127_46b4_834c_dc204d34fbcd.slice - libcontainer container kubepods-besteffort-poddeb77b33_8127_46b4_834c_dc204d34fbcd.slice. 
Aug 13 00:47:17.790628 kubelet[2596]: I0813 00:47:17.789081 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deb77b33-8127-46b4-834c-dc204d34fbcd-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gnhj4\" (UID: \"deb77b33-8127-46b4-834c-dc204d34fbcd\") " pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:47:17.790628 kubelet[2596]: I0813 00:47:17.789381 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5zr\" (UniqueName: \"kubernetes.io/projected/deb77b33-8127-46b4-834c-dc204d34fbcd-kube-api-access-xg5zr\") pod \"cilium-operator-6c4d7847fc-gnhj4\" (UID: \"deb77b33-8127-46b4-834c-dc204d34fbcd\") " pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:47:17.885609 kubelet[2596]: E0813 00:47:17.885513 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:17.950120 kubelet[2596]: E0813 00:47:17.950069 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:17.950990 containerd[1485]: time="2025-08-13T00:47:17.950523260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g2zwh,Uid:b7cd82af-fc9c-48e2-9f40-74ff74a3f593,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:17.968299 kubelet[2596]: E0813 00:47:17.967537 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:17.969639 containerd[1485]: time="2025-08-13T00:47:17.969597560Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-8pgbq,Uid:5ed7fd22-d5dc-4877-8b35-3a97e246932f,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:17.976249 containerd[1485]: time="2025-08-13T00:47:17.976171550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:17.976969 containerd[1485]: time="2025-08-13T00:47:17.976915750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:17.977380 containerd[1485]: time="2025-08-13T00:47:17.977342510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:17.977657 containerd[1485]: time="2025-08-13T00:47:17.977618870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:17.999589 systemd[1]: Started cri-containerd-3e0a399f1e2042817846628d2e5f66a6d9d813a6fef98e8f8ab4d99aef4c9794.scope - libcontainer container 3e0a399f1e2042817846628d2e5f66a6d9d813a6fef98e8f8ab4d99aef4c9794. Aug 13 00:47:18.005686 containerd[1485]: time="2025-08-13T00:47:18.005515390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:18.008838 containerd[1485]: time="2025-08-13T00:47:18.008632920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:18.008838 containerd[1485]: time="2025-08-13T00:47:18.008651520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:18.008838 containerd[1485]: time="2025-08-13T00:47:18.008716230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:18.031595 systemd[1]: Started cri-containerd-c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97.scope - libcontainer container c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97. Aug 13 00:47:18.039208 containerd[1485]: time="2025-08-13T00:47:18.039135180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g2zwh,Uid:b7cd82af-fc9c-48e2-9f40-74ff74a3f593,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e0a399f1e2042817846628d2e5f66a6d9d813a6fef98e8f8ab4d99aef4c9794\"" Aug 13 00:47:18.040212 kubelet[2596]: E0813 00:47:18.040176 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:18.046284 containerd[1485]: time="2025-08-13T00:47:18.046259610Z" level=info msg="CreateContainer within sandbox \"3e0a399f1e2042817846628d2e5f66a6d9d813a6fef98e8f8ab4d99aef4c9794\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:47:18.062758 containerd[1485]: time="2025-08-13T00:47:18.062699960Z" level=info msg="CreateContainer within sandbox \"3e0a399f1e2042817846628d2e5f66a6d9d813a6fef98e8f8ab4d99aef4c9794\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc06c8e5d958a8f8d3e758c92b7e511a416c06495680e6f45d671a1a7792a3d7\"" Aug 13 00:47:18.063305 containerd[1485]: time="2025-08-13T00:47:18.063267510Z" level=info msg="StartContainer for \"dc06c8e5d958a8f8d3e758c92b7e511a416c06495680e6f45d671a1a7792a3d7\"" Aug 13 00:47:18.072922 kubelet[2596]: E0813 00:47:18.072884 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:18.074049 containerd[1485]: time="2025-08-13T00:47:18.073888900Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gnhj4,Uid:deb77b33-8127-46b4-834c-dc204d34fbcd,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:18.082687 containerd[1485]: time="2025-08-13T00:47:18.082662930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8pgbq,Uid:5ed7fd22-d5dc-4877-8b35-3a97e246932f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\"" Aug 13 00:47:18.083111 kubelet[2596]: E0813 00:47:18.083092 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:18.085158 containerd[1485]: time="2025-08-13T00:47:18.085062230Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:47:18.105325 containerd[1485]: time="2025-08-13T00:47:18.104831840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:47:18.105325 containerd[1485]: time="2025-08-13T00:47:18.104933270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:47:18.105325 containerd[1485]: time="2025-08-13T00:47:18.104958840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:18.105581 containerd[1485]: time="2025-08-13T00:47:18.105538660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:47:18.115641 systemd[1]: Started cri-containerd-dc06c8e5d958a8f8d3e758c92b7e511a416c06495680e6f45d671a1a7792a3d7.scope - libcontainer container dc06c8e5d958a8f8d3e758c92b7e511a416c06495680e6f45d671a1a7792a3d7. 
Aug 13 00:47:18.132641 systemd[1]: Started cri-containerd-9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684.scope - libcontainer container 9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684. Aug 13 00:47:18.164872 containerd[1485]: time="2025-08-13T00:47:18.164770100Z" level=info msg="StartContainer for \"dc06c8e5d958a8f8d3e758c92b7e511a416c06495680e6f45d671a1a7792a3d7\" returns successfully" Aug 13 00:47:18.184279 containerd[1485]: time="2025-08-13T00:47:18.183677760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gnhj4,Uid:deb77b33-8127-46b4-834c-dc204d34fbcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\"" Aug 13 00:47:18.184489 kubelet[2596]: E0813 00:47:18.184251 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:18.893349 kubelet[2596]: E0813 00:47:18.890810 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:18.893349 kubelet[2596]: E0813 00:47:18.891219 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:18.898106 kubelet[2596]: I0813 00:47:18.898055 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g2zwh" podStartSLOduration=1.89804544 podStartE2EDuration="1.89804544s" podCreationTimestamp="2025-08-13 00:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:18.89790853 +0000 UTC m=+7.159710201" 
watchObservedRunningTime="2025-08-13 00:47:18.89804544 +0000 UTC m=+7.159847121"
Aug 13 00:47:19.377838 kubelet[2596]: E0813 00:47:19.377447 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:19.892507 kubelet[2596]: E0813 00:47:19.892223 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:20.893282 kubelet[2596]: E0813 00:47:20.893242 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:22.348402 systemd-timesyncd[1394]: Contacted time server [2607:f710:35::29c:0:7]:123 (2.flatcar.pool.ntp.org).
Aug 13 00:47:22.348415 systemd-resolved[1392]: Clock change detected. Flushing caches.
Aug 13 00:47:22.348454 systemd-timesyncd[1394]: Initial clock synchronization to Wed 2025-08-13 00:47:22.348111 UTC.
Aug 13 00:47:22.612193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564219361.mount: Deactivated successfully.
Aug 13 00:47:24.104806 containerd[1485]: time="2025-08-13T00:47:24.104739764Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:24.105757 containerd[1485]: time="2025-08-13T00:47:24.105563874Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 13 00:47:24.106696 containerd[1485]: time="2025-08-13T00:47:24.106143074Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:24.107724 containerd[1485]: time="2025-08-13T00:47:24.107513324Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.06317559s"
Aug 13 00:47:24.107724 containerd[1485]: time="2025-08-13T00:47:24.107548264Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 00:47:24.109004 containerd[1485]: time="2025-08-13T00:47:24.108827654Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:47:24.111004 containerd[1485]: time="2025-08-13T00:47:24.110896654Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:47:24.121835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337011371.mount: Deactivated successfully.
Aug 13 00:47:24.123106 containerd[1485]: time="2025-08-13T00:47:24.123082314Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\""
Aug 13 00:47:24.123590 containerd[1485]: time="2025-08-13T00:47:24.123560804Z" level=info msg="StartContainer for \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\""
Aug 13 00:47:24.147895 kubelet[2596]: E0813 00:47:24.147207 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:24.158793 systemd[1]: Started cri-containerd-459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6.scope - libcontainer container 459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6.
Aug 13 00:47:24.182457 containerd[1485]: time="2025-08-13T00:47:24.182425514Z" level=info msg="StartContainer for \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\" returns successfully"
Aug 13 00:47:24.196240 systemd[1]: cri-containerd-459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6.scope: Deactivated successfully.
Aug 13 00:47:24.249513 containerd[1485]: time="2025-08-13T00:47:24.249443114Z" level=info msg="shim disconnected" id=459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6 namespace=k8s.io
Aug 13 00:47:24.249513 containerd[1485]: time="2025-08-13T00:47:24.249483554Z" level=warning msg="cleaning up after shim disconnected" id=459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6 namespace=k8s.io
Aug 13 00:47:24.249513 containerd[1485]: time="2025-08-13T00:47:24.249491914Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:47:24.857098 kubelet[2596]: E0813 00:47:24.857052 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:24.861925 containerd[1485]: time="2025-08-13T00:47:24.861860184Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:47:24.871551 containerd[1485]: time="2025-08-13T00:47:24.871523264Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\""
Aug 13 00:47:24.872030 containerd[1485]: time="2025-08-13T00:47:24.871986954Z" level=info msg="StartContainer for \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\""
Aug 13 00:47:24.911028 systemd[1]: Started cri-containerd-6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb.scope - libcontainer container 6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb.
Aug 13 00:47:24.958888 containerd[1485]: time="2025-08-13T00:47:24.958847634Z" level=info msg="StartContainer for \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\" returns successfully"
Aug 13 00:47:24.971871 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:47:24.972188 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:47:24.972655 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:47:24.980918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:47:24.981115 systemd[1]: cri-containerd-6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb.scope: Deactivated successfully.
Aug 13 00:47:25.010165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:47:25.014312 containerd[1485]: time="2025-08-13T00:47:25.014251474Z" level=info msg="shim disconnected" id=6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb namespace=k8s.io
Aug 13 00:47:25.014312 containerd[1485]: time="2025-08-13T00:47:25.014305874Z" level=warning msg="cleaning up after shim disconnected" id=6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb namespace=k8s.io
Aug 13 00:47:25.014312 containerd[1485]: time="2025-08-13T00:47:25.014314774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:47:25.119188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6-rootfs.mount: Deactivated successfully.
Aug 13 00:47:25.319932 containerd[1485]: time="2025-08-13T00:47:25.319890394Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:25.320722 containerd[1485]: time="2025-08-13T00:47:25.320606504Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 13 00:47:25.321376 containerd[1485]: time="2025-08-13T00:47:25.321138264Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:25.322343 containerd[1485]: time="2025-08-13T00:47:25.322310154Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.21345427s"
Aug 13 00:47:25.322387 containerd[1485]: time="2025-08-13T00:47:25.322341654Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:47:25.325493 containerd[1485]: time="2025-08-13T00:47:25.325449894Z" level=info msg="CreateContainer within sandbox \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 00:47:25.342576 containerd[1485]: time="2025-08-13T00:47:25.342538874Z" level=info msg="CreateContainer within sandbox \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\""
Aug 13 00:47:25.346474 containerd[1485]: time="2025-08-13T00:47:25.346368924Z" level=info msg="StartContainer for \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\""
Aug 13 00:47:25.376804 systemd[1]: Started cri-containerd-cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6.scope - libcontainer container cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6.
Aug 13 00:47:25.405008 containerd[1485]: time="2025-08-13T00:47:25.403881644Z" level=info msg="StartContainer for \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\" returns successfully"
Aug 13 00:47:25.859940 kubelet[2596]: E0813 00:47:25.859908 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:25.861998 kubelet[2596]: E0813 00:47:25.861972 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:25.864948 containerd[1485]: time="2025-08-13T00:47:25.864825534Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:47:25.880228 containerd[1485]: time="2025-08-13T00:47:25.880190354Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\""
Aug 13 00:47:25.880579 containerd[1485]: time="2025-08-13T00:47:25.880552144Z" level=info msg="StartContainer for \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\""
Aug 13 00:47:25.931845 systemd[1]: Started cri-containerd-9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68.scope - libcontainer container 9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68.
Aug 13 00:47:25.938584 kubelet[2596]: I0813 00:47:25.938277 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" podStartSLOduration=2.759419934 podStartE2EDuration="8.938264024s" podCreationTimestamp="2025-08-13 00:47:17 +0000 UTC" firstStartedPulling="2025-08-13 00:47:18.18500996 +0000 UTC m=+6.446811631" lastFinishedPulling="2025-08-13 00:47:25.323070634 +0000 UTC m=+12.625655721" observedRunningTime="2025-08-13 00:47:25.913440594 +0000 UTC m=+13.216025671" watchObservedRunningTime="2025-08-13 00:47:25.938264024 +0000 UTC m=+13.240849101"
Aug 13 00:47:26.026034 containerd[1485]: time="2025-08-13T00:47:26.025972084Z" level=info msg="StartContainer for \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\" returns successfully"
Aug 13 00:47:26.040902 systemd[1]: cri-containerd-9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68.scope: Deactivated successfully.
Aug 13 00:47:26.090386 containerd[1485]: time="2025-08-13T00:47:26.090309574Z" level=info msg="shim disconnected" id=9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68 namespace=k8s.io
Aug 13 00:47:26.090386 containerd[1485]: time="2025-08-13T00:47:26.090376184Z" level=warning msg="cleaning up after shim disconnected" id=9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68 namespace=k8s.io
Aug 13 00:47:26.090386 containerd[1485]: time="2025-08-13T00:47:26.090384774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:47:26.866124 kubelet[2596]: E0813 00:47:26.866089 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:26.868505 kubelet[2596]: E0813 00:47:26.867821 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:26.870524 containerd[1485]: time="2025-08-13T00:47:26.870361374Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:47:26.887764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487848342.mount: Deactivated successfully.
Aug 13 00:47:26.890003 containerd[1485]: time="2025-08-13T00:47:26.889922774Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\""
Aug 13 00:47:26.891210 containerd[1485]: time="2025-08-13T00:47:26.891190014Z" level=info msg="StartContainer for \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\""
Aug 13 00:47:26.942813 systemd[1]: Started cri-containerd-32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1.scope - libcontainer container 32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1.
Aug 13 00:47:26.964798 systemd[1]: cri-containerd-32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1.scope: Deactivated successfully.
Aug 13 00:47:26.966277 containerd[1485]: time="2025-08-13T00:47:26.966232574Z" level=info msg="StartContainer for \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\" returns successfully"
Aug 13 00:47:26.985575 containerd[1485]: time="2025-08-13T00:47:26.985526904Z" level=info msg="shim disconnected" id=32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1 namespace=k8s.io
Aug 13 00:47:26.985575 containerd[1485]: time="2025-08-13T00:47:26.985570374Z" level=warning msg="cleaning up after shim disconnected" id=32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1 namespace=k8s.io
Aug 13 00:47:26.985805 containerd[1485]: time="2025-08-13T00:47:26.985579074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:47:27.119512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1-rootfs.mount: Deactivated successfully.
Aug 13 00:47:27.871286 kubelet[2596]: E0813 00:47:27.871236 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:27.874824 containerd[1485]: time="2025-08-13T00:47:27.874784694Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:47:27.896142 containerd[1485]: time="2025-08-13T00:47:27.896109554Z" level=info msg="CreateContainer within sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\""
Aug 13 00:47:27.896883 containerd[1485]: time="2025-08-13T00:47:27.896843154Z" level=info msg="StartContainer for \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\""
Aug 13 00:47:27.929801 systemd[1]: Started cri-containerd-8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37.scope - libcontainer container 8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37.
Aug 13 00:47:27.959058 containerd[1485]: time="2025-08-13T00:47:27.959021904Z" level=info msg="StartContainer for \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\" returns successfully"
Aug 13 00:47:28.093525 kubelet[2596]: I0813 00:47:28.093459 2596 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 00:47:28.138176 systemd[1]: Created slice kubepods-burstable-pod100d74f6_4d22_4927_97d3_23e0803ebbc9.slice - libcontainer container kubepods-burstable-pod100d74f6_4d22_4927_97d3_23e0803ebbc9.slice.
Aug 13 00:47:28.141340 kubelet[2596]: W0813 00:47:28.141291 2596 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-234-29-142" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-234-29-142' and this object
Aug 13 00:47:28.141402 kubelet[2596]: E0813 00:47:28.141334 2596 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-234-29-142\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-29-142' and this object" logger="UnhandledError"
Aug 13 00:47:28.141893 kubelet[2596]: I0813 00:47:28.141518 2596 status_manager.go:890] "Failed to get status for pod" podUID="100d74f6-4d22-4927-97d3-23e0803ebbc9" pod="kube-system/coredns-668d6bf9bc-ksr2d" err="pods \"coredns-668d6bf9bc-ksr2d\" is forbidden: User \"system:node:172-234-29-142\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-29-142' and this object"
Aug 13 00:47:28.146206 kubelet[2596]: I0813 00:47:28.146161 2596 status_manager.go:890] "Failed to get status for pod" podUID="100d74f6-4d22-4927-97d3-23e0803ebbc9" pod="kube-system/coredns-668d6bf9bc-ksr2d" err="pods \"coredns-668d6bf9bc-ksr2d\" is forbidden: User \"system:node:172-234-29-142\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-29-142' and this object"
Aug 13 00:47:28.147930 systemd[1]: Created slice kubepods-burstable-pod34b43726_7b78_4691_bb9f_5bc6a03df5fc.slice - libcontainer container kubepods-burstable-pod34b43726_7b78_4691_bb9f_5bc6a03df5fc.slice.
Aug 13 00:47:28.208540 kubelet[2596]: I0813 00:47:28.208345 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/100d74f6-4d22-4927-97d3-23e0803ebbc9-config-volume\") pod \"coredns-668d6bf9bc-ksr2d\" (UID: \"100d74f6-4d22-4927-97d3-23e0803ebbc9\") " pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:47:28.208540 kubelet[2596]: I0813 00:47:28.208403 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp24j\" (UniqueName: \"kubernetes.io/projected/34b43726-7b78-4691-bb9f-5bc6a03df5fc-kube-api-access-jp24j\") pod \"coredns-668d6bf9bc-cnmtx\" (UID: \"34b43726-7b78-4691-bb9f-5bc6a03df5fc\") " pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:47:28.208540 kubelet[2596]: I0813 00:47:28.208431 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctlgg\" (UniqueName: \"kubernetes.io/projected/100d74f6-4d22-4927-97d3-23e0803ebbc9-kube-api-access-ctlgg\") pod \"coredns-668d6bf9bc-ksr2d\" (UID: \"100d74f6-4d22-4927-97d3-23e0803ebbc9\") " pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:47:28.208540 kubelet[2596]: I0813 00:47:28.208452 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34b43726-7b78-4691-bb9f-5bc6a03df5fc-config-volume\") pod \"coredns-668d6bf9bc-cnmtx\" (UID: \"34b43726-7b78-4691-bb9f-5bc6a03df5fc\") " pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:47:28.877600 kubelet[2596]: E0813 00:47:28.877570 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:28.890870 kubelet[2596]: I0813 00:47:28.890818 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8pgbq" podStartSLOduration=6.825235474 podStartE2EDuration="11.890802224s" podCreationTimestamp="2025-08-13 00:47:17 +0000 UTC" firstStartedPulling="2025-08-13 00:47:18.08383805 +0000 UTC m=+6.345639731" lastFinishedPulling="2025-08-13 00:47:24.108621404 +0000 UTC m=+11.411206481" observedRunningTime="2025-08-13 00:47:28.889035374 +0000 UTC m=+16.191620471" watchObservedRunningTime="2025-08-13 00:47:28.890802224 +0000 UTC m=+16.193387311"
Aug 13 00:47:29.310975 kubelet[2596]: E0813 00:47:29.310768 2596 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:47:29.310975 kubelet[2596]: E0813 00:47:29.310843 2596 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:47:29.310975 kubelet[2596]: E0813 00:47:29.310931 2596 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/100d74f6-4d22-4927-97d3-23e0803ebbc9-config-volume podName:100d74f6-4d22-4927-97d3-23e0803ebbc9 nodeName:}" failed. No retries permitted until 2025-08-13 00:47:29.810900724 +0000 UTC m=+17.113485811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/100d74f6-4d22-4927-97d3-23e0803ebbc9-config-volume") pod "coredns-668d6bf9bc-ksr2d" (UID: "100d74f6-4d22-4927-97d3-23e0803ebbc9") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:47:29.310975 kubelet[2596]: E0813 00:47:29.310957 2596 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b43726-7b78-4691-bb9f-5bc6a03df5fc-config-volume podName:34b43726-7b78-4691-bb9f-5bc6a03df5fc nodeName:}" failed. No retries permitted until 2025-08-13 00:47:29.810945964 +0000 UTC m=+17.113531051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34b43726-7b78-4691-bb9f-5bc6a03df5fc-config-volume") pod "coredns-668d6bf9bc-cnmtx" (UID: "34b43726-7b78-4691-bb9f-5bc6a03df5fc") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:47:29.810979 update_engine[1462]: I20250813 00:47:29.810843 1462 update_attempter.cc:509] Updating boot flags...
Aug 13 00:47:29.870719 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3398)
Aug 13 00:47:29.889809 kubelet[2596]: E0813 00:47:29.888305 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:29.946921 kubelet[2596]: E0813 00:47:29.946882 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:29.949196 containerd[1485]: time="2025-08-13T00:47:29.949089484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ksr2d,Uid:100d74f6-4d22-4927-97d3-23e0803ebbc9,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:29.951038 kubelet[2596]: E0813 00:47:29.950976 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:29.951472 containerd[1485]: time="2025-08-13T00:47:29.951439694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cnmtx,Uid:34b43726-7b78-4691-bb9f-5bc6a03df5fc,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:29.986951 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3399)
Aug 13 00:47:30.081839 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3399)
Aug 13 00:47:30.195847 systemd-networkd[1389]: cilium_host: Link UP
Aug 13 00:47:30.197342 systemd-networkd[1389]: cilium_net: Link UP
Aug 13 00:47:30.198195 systemd-networkd[1389]: cilium_net: Gained carrier
Aug 13 00:47:30.198470 systemd-networkd[1389]: cilium_host: Gained carrier
Aug 13 00:47:30.316975 systemd-networkd[1389]: cilium_vxlan: Link UP
Aug 13 00:47:30.316990 systemd-networkd[1389]: cilium_vxlan: Gained carrier
Aug 13 00:47:30.529708 kernel: NET: Registered PF_ALG protocol family
Aug 13 00:47:30.827858 systemd-networkd[1389]: cilium_host: Gained IPv6LL
Aug 13 00:47:30.881424 kubelet[2596]: E0813 00:47:30.881142 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:30.955944 systemd-networkd[1389]: cilium_net: Gained IPv6LL
Aug 13 00:47:31.261553 systemd-networkd[1389]: lxc_health: Link UP
Aug 13 00:47:31.264115 systemd-networkd[1389]: lxc_health: Gained carrier
Aug 13 00:47:31.527611 systemd-networkd[1389]: lxc02395bcc2555: Link UP
Aug 13 00:47:31.530049 kernel: eth0: renamed from tmpab8b7
Aug 13 00:47:31.535369 systemd-networkd[1389]: lxc02395bcc2555: Gained carrier
Aug 13 00:47:31.558704 kernel: eth0: renamed from tmp28959
Aug 13 00:47:31.563286 systemd-networkd[1389]: lxcee1f6f7d9cba: Link UP
Aug 13 00:47:31.567289 systemd-networkd[1389]: lxcee1f6f7d9cba: Gained carrier
Aug 13 00:47:31.851899 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL
Aug 13 00:47:32.747951 systemd-networkd[1389]: lxc_health: Gained IPv6LL
Aug 13 00:47:32.813553 systemd-networkd[1389]: lxc02395bcc2555: Gained IPv6LL
Aug 13 00:47:32.921539 kubelet[2596]: I0813 00:47:32.921501 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:47:32.923230 kubelet[2596]: I0813 00:47:32.921718 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:47:32.925126 kubelet[2596]: I0813 00:47:32.925082 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:47:32.930344 kubelet[2596]: E0813 00:47:32.930326 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:32.948703 kubelet[2596]: I0813 00:47:32.943503 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:47:32.948703 kubelet[2596]: I0813 00:47:32.943575 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/cilium-8pgbq","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:47:32.948703 kubelet[2596]: E0813 00:47:32.943609 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:47:32.948703 kubelet[2596]: E0813 00:47:32.943618 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:47:32.948703 kubelet[2596]: E0813 00:47:32.943628 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:47:32.948703 kubelet[2596]: E0813 00:47:32.943636 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:47:32.948703 kubelet[2596]: E0813 00:47:32.943644 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:47:32.948703 kubelet[2596]: E0813 00:47:32.943651 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:47:32.948703 kubelet[2596]: E0813 00:47:32.943659 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:47:32.948932 kubelet[2596]: E0813 00:47:32.943682 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:47:32.948988 kubelet[2596]: I0813 00:47:32.948976 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:47:33.196866 systemd-networkd[1389]: lxcee1f6f7d9cba: Gained IPv6LL
Aug 13 00:47:33.887138 kubelet[2596]: E0813 00:47:33.885530 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:34.673820 containerd[1485]: time="2025-08-13T00:47:34.673427774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:47:34.673820 containerd[1485]: time="2025-08-13T00:47:34.673484004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:47:34.673820 containerd[1485]: time="2025-08-13T00:47:34.673498284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:47:34.673820 containerd[1485]: time="2025-08-13T00:47:34.673560924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:47:34.709381 systemd[1]: Started cri-containerd-ab8b74928179807ede6faa49e3281f976aa518cc6f428fd73dd46893064221ba.scope - libcontainer container ab8b74928179807ede6faa49e3281f976aa518cc6f428fd73dd46893064221ba.
Aug 13 00:47:34.723928 containerd[1485]: time="2025-08-13T00:47:34.723545584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:47:34.723928 containerd[1485]: time="2025-08-13T00:47:34.723586964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:47:34.723928 containerd[1485]: time="2025-08-13T00:47:34.723595194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:47:34.723928 containerd[1485]: time="2025-08-13T00:47:34.723661784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:47:34.756133 systemd[1]: Started cri-containerd-2895940a1b669a521d8f0e7446f5f2e775171b42e4a250506dc45d623b22e231.scope - libcontainer container 2895940a1b669a521d8f0e7446f5f2e775171b42e4a250506dc45d623b22e231.
Aug 13 00:47:34.811969 containerd[1485]: time="2025-08-13T00:47:34.811913964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cnmtx,Uid:34b43726-7b78-4691-bb9f-5bc6a03df5fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab8b74928179807ede6faa49e3281f976aa518cc6f428fd73dd46893064221ba\""
Aug 13 00:47:34.814445 kubelet[2596]: E0813 00:47:34.813278 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:34.818469 containerd[1485]: time="2025-08-13T00:47:34.818220744Z" level=info msg="CreateContainer within sandbox \"ab8b74928179807ede6faa49e3281f976aa518cc6f428fd73dd46893064221ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:47:34.834882 containerd[1485]: time="2025-08-13T00:47:34.834851624Z" level=info msg="CreateContainer within sandbox \"ab8b74928179807ede6faa49e3281f976aa518cc6f428fd73dd46893064221ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2979d8ad3d610abbf63ddccbbf19667d83e7ee8394a3b17d3a0795075f0a13b5\""
Aug 13 00:47:34.836099 containerd[1485]: time="2025-08-13T00:47:34.835390524Z" level=info msg="StartContainer for \"2979d8ad3d610abbf63ddccbbf19667d83e7ee8394a3b17d3a0795075f0a13b5\""
Aug 13 00:47:34.867716 containerd[1485]: time="2025-08-13T00:47:34.867637264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ksr2d,Uid:100d74f6-4d22-4927-97d3-23e0803ebbc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2895940a1b669a521d8f0e7446f5f2e775171b42e4a250506dc45d623b22e231\""
Aug 13 00:47:34.868555 kubelet[2596]: E0813 00:47:34.868530 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:34.873077 containerd[1485]: time="2025-08-13T00:47:34.872768744Z" level=info msg="CreateContainer within sandbox \"2895940a1b669a521d8f0e7446f5f2e775171b42e4a250506dc45d623b22e231\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:47:34.893996 kubelet[2596]: E0813 00:47:34.893474 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:34.895118 containerd[1485]: time="2025-08-13T00:47:34.895091514Z" level=info msg="CreateContainer within sandbox \"2895940a1b669a521d8f0e7446f5f2e775171b42e4a250506dc45d623b22e231\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5a6a2bb5928f9e99861158f7da52e2b73c17186e5e83544983ba9237e76a3d9\""
Aug 13 00:47:34.895898 containerd[1485]: time="2025-08-13T00:47:34.895873954Z" level=info msg="StartContainer for \"a5a6a2bb5928f9e99861158f7da52e2b73c17186e5e83544983ba9237e76a3d9\""
Aug 13 00:47:34.901849 systemd[1]: Started cri-containerd-2979d8ad3d610abbf63ddccbbf19667d83e7ee8394a3b17d3a0795075f0a13b5.scope - libcontainer container 2979d8ad3d610abbf63ddccbbf19667d83e7ee8394a3b17d3a0795075f0a13b5.
Aug 13 00:47:34.926814 systemd[1]: Started cri-containerd-a5a6a2bb5928f9e99861158f7da52e2b73c17186e5e83544983ba9237e76a3d9.scope - libcontainer container a5a6a2bb5928f9e99861158f7da52e2b73c17186e5e83544983ba9237e76a3d9.
Aug 13 00:47:34.945035 containerd[1485]: time="2025-08-13T00:47:34.944969104Z" level=info msg="StartContainer for \"2979d8ad3d610abbf63ddccbbf19667d83e7ee8394a3b17d3a0795075f0a13b5\" returns successfully"
Aug 13 00:47:34.969212 containerd[1485]: time="2025-08-13T00:47:34.969038484Z" level=info msg="StartContainer for \"a5a6a2bb5928f9e99861158f7da52e2b73c17186e5e83544983ba9237e76a3d9\" returns successfully"
Aug 13 00:47:35.896098 kubelet[2596]: E0813 00:47:35.895479 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:35.898502 kubelet[2596]: E0813 00:47:35.898237 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:35.907181 kubelet[2596]: I0813 00:47:35.907151 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ksr2d" podStartSLOduration=18.907140744 podStartE2EDuration="18.907140744s" podCreationTimestamp="2025-08-13 00:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:35.906980484 +0000 UTC m=+23.209565571" watchObservedRunningTime="2025-08-13 00:47:35.907140744 +0000 UTC m=+23.209725821"
Aug 13 00:47:36.901299 kubelet[2596]: E0813 00:47:36.900659 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:47:36.901299 kubelet[2596]: E0813 00:47:36.901219 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13
00:47:37.902880 kubelet[2596]: E0813 00:47:37.902764 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:37.902880 kubelet[2596]: E0813 00:47:37.902781 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:47:42.966985 kubelet[2596]: I0813 00:47:42.966941 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:42.966985 kubelet[2596]: I0813 00:47:42.966984 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:47:42.969913 kubelet[2596]: I0813 00:47:42.969887 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:47:42.980099 kubelet[2596]: I0813 00:47:42.980068 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:42.980191 kubelet[2596]: I0813 00:47:42.980172 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:47:42.980219 kubelet[2596]: E0813 00:47:42.980205 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:47:42.980219 kubelet[2596]: E0813 00:47:42.980218 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:47:42.980269 kubelet[2596]: E0813 00:47:42.980227 2596 
eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:47:42.980269 kubelet[2596]: E0813 00:47:42.980249 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:47:42.980269 kubelet[2596]: E0813 00:47:42.980257 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:42.980269 kubelet[2596]: E0813 00:47:42.980265 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:47:42.980342 kubelet[2596]: E0813 00:47:42.980273 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:42.980342 kubelet[2596]: E0813 00:47:42.980281 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:42.980342 kubelet[2596]: I0813 00:47:42.980290 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:47:52.996563 kubelet[2596]: I0813 00:47:52.996537 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:52.996563 kubelet[2596]: I0813 00:47:52.996568 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:47:52.999419 kubelet[2596]: I0813 00:47:52.999050 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:47:53.009517 kubelet[2596]: I0813 00:47:53.009501 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:53.009656 kubelet[2596]: I0813 00:47:53.009638 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:47:53.009730 kubelet[2596]: E0813 00:47:53.009690 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:47:53.009730 kubelet[2596]: E0813 00:47:53.009703 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:47:53.009730 kubelet[2596]: E0813 00:47:53.009711 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:47:53.009730 kubelet[2596]: E0813 00:47:53.009718 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:47:53.009730 kubelet[2596]: E0813 00:47:53.009726 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:47:53.009862 kubelet[2596]: E0813 00:47:53.009735 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:47:53.009862 kubelet[2596]: E0813 00:47:53.009742 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:47:53.009862 kubelet[2596]: E0813 00:47:53.009749 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:47:53.009862 kubelet[2596]: I0813 00:47:53.009759 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:03.023884 kubelet[2596]: I0813 00:48:03.023814 
2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:03.023884 kubelet[2596]: I0813 00:48:03.023856 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:48:03.027579 kubelet[2596]: I0813 00:48:03.025436 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:03.038026 kubelet[2596]: I0813 00:48:03.037998 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:03.038085 kubelet[2596]: I0813 00:48:03.038073 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:48:03.038114 kubelet[2596]: E0813 00:48:03.038098 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:48:03.038114 kubelet[2596]: E0813 00:48:03.038109 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:48:03.038163 kubelet[2596]: E0813 00:48:03.038117 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:48:03.038163 kubelet[2596]: E0813 00:48:03.038126 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:48:03.038163 kubelet[2596]: E0813 00:48:03.038134 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:48:03.038163 kubelet[2596]: E0813 00:48:03.038142 2596 
eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:48:03.038163 kubelet[2596]: E0813 00:48:03.038150 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:48:03.038163 kubelet[2596]: E0813 00:48:03.038157 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:48:03.038291 kubelet[2596]: I0813 00:48:03.038169 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:13.052135 kubelet[2596]: I0813 00:48:13.052052 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:13.052135 kubelet[2596]: I0813 00:48:13.052090 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:48:13.053567 kubelet[2596]: I0813 00:48:13.053550 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:13.063904 kubelet[2596]: I0813 00:48:13.063881 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:13.064028 kubelet[2596]: I0813 00:48:13.064012 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:48:13.064061 kubelet[2596]: E0813 00:48:13.064045 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:48:13.064061 kubelet[2596]: E0813 00:48:13.064056 2596 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:48:13.064126 kubelet[2596]: E0813 00:48:13.064063 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:48:13.064126 kubelet[2596]: E0813 00:48:13.064072 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:48:13.064126 kubelet[2596]: E0813 00:48:13.064081 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:48:13.064126 kubelet[2596]: E0813 00:48:13.064089 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:48:13.064126 kubelet[2596]: E0813 00:48:13.064096 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:48:13.064126 kubelet[2596]: E0813 00:48:13.064103 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:48:13.064126 kubelet[2596]: I0813 00:48:13.064112 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:23.081710 kubelet[2596]: I0813 00:48:23.081628 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:23.081710 kubelet[2596]: I0813 00:48:23.081666 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:48:23.084126 kubelet[2596]: I0813 00:48:23.083756 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:23.094747 kubelet[2596]: I0813 00:48:23.094722 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:23.094858 kubelet[2596]: I0813 00:48:23.094834 2596 eviction_manager.go:405] "Eviction manager: pods 
ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:48:23.094897 kubelet[2596]: E0813 00:48:23.094868 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:48:23.094897 kubelet[2596]: E0813 00:48:23.094879 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:48:23.094897 kubelet[2596]: E0813 00:48:23.094887 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:48:23.094897 kubelet[2596]: E0813 00:48:23.094896 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:48:23.095015 kubelet[2596]: E0813 00:48:23.094904 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:48:23.095015 kubelet[2596]: E0813 00:48:23.094911 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:48:23.095015 kubelet[2596]: E0813 00:48:23.094920 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:48:23.095015 kubelet[2596]: E0813 00:48:23.094927 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:48:23.095015 kubelet[2596]: I0813 00:48:23.094937 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:31.806699 kubelet[2596]: 
E0813 00:48:31.806622 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:48:33.124210 kubelet[2596]: I0813 00:48:33.124156 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:33.124210 kubelet[2596]: I0813 00:48:33.124212 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:48:33.127286 kubelet[2596]: I0813 00:48:33.127253 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:33.140278 kubelet[2596]: I0813 00:48:33.140243 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:33.140476 kubelet[2596]: I0813 00:48:33.140452 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:48:33.140538 kubelet[2596]: E0813 00:48:33.140498 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:48:33.140538 kubelet[2596]: E0813 00:48:33.140512 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:48:33.140538 kubelet[2596]: E0813 00:48:33.140523 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:48:33.140538 kubelet[2596]: E0813 00:48:33.140537 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 
00:48:33.140651 kubelet[2596]: E0813 00:48:33.140548 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:48:33.140651 kubelet[2596]: E0813 00:48:33.140559 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:48:33.140651 kubelet[2596]: E0813 00:48:33.140569 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:48:33.140651 kubelet[2596]: E0813 00:48:33.140579 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:48:33.140651 kubelet[2596]: I0813 00:48:33.140590 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:40.806969 kubelet[2596]: E0813 00:48:40.806418 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:48:40.808580 kubelet[2596]: E0813 00:48:40.807239 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:48:43.158581 kubelet[2596]: I0813 00:48:43.158502 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:43.158581 kubelet[2596]: I0813 00:48:43.158556 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:48:43.161085 kubelet[2596]: I0813 00:48:43.160962 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:43.176769 kubelet[2596]: I0813 00:48:43.176718 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:43.177023 
kubelet[2596]: I0813 00:48:43.176884 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:48:43.177023 kubelet[2596]: E0813 00:48:43.176920 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:48:43.177023 kubelet[2596]: E0813 00:48:43.176933 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:48:43.177023 kubelet[2596]: E0813 00:48:43.176942 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:48:43.177023 kubelet[2596]: E0813 00:48:43.176952 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:48:43.177023 kubelet[2596]: E0813 00:48:43.176962 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:48:43.177023 kubelet[2596]: E0813 00:48:43.176973 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:48:43.177023 kubelet[2596]: E0813 00:48:43.176982 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:48:43.177023 kubelet[2596]: E0813 00:48:43.176992 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:48:43.177023 kubelet[2596]: I0813 00:48:43.177001 2596 eviction_manager.go:438] 
"Eviction manager: unable to evict any pods from the node" Aug 13 00:48:43.808045 kubelet[2596]: E0813 00:48:43.807998 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:48:45.806993 kubelet[2596]: E0813 00:48:45.806927 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:48:47.806894 kubelet[2596]: E0813 00:48:47.806816 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:48:48.807323 kubelet[2596]: E0813 00:48:48.806559 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:48:53.197029 kubelet[2596]: I0813 00:48:53.196982 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:53.197029 kubelet[2596]: I0813 00:48:53.197026 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:48:53.198971 kubelet[2596]: I0813 00:48:53.198592 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:53.210135 kubelet[2596]: I0813 00:48:53.210117 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:53.210252 kubelet[2596]: I0813 00:48:53.210235 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:48:53.210296 kubelet[2596]: E0813 00:48:53.210269 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:48:53.210296 kubelet[2596]: E0813 00:48:53.210282 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:48:53.210296 kubelet[2596]: E0813 00:48:53.210291 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:48:53.210359 kubelet[2596]: E0813 00:48:53.210300 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:48:53.210359 kubelet[2596]: E0813 00:48:53.210309 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:48:53.210359 kubelet[2596]: E0813 00:48:53.210317 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:48:53.210359 kubelet[2596]: E0813 00:48:53.210326 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:48:53.210359 kubelet[2596]: E0813 00:48:53.210334 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:48:53.210359 kubelet[2596]: I0813 00:48:53.210343 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:54.199237 systemd[1]: Started 
sshd@7-172.234.29.142:22-139.178.89.65:56418.service - OpenSSH per-connection server daemon (139.178.89.65:56418). Aug 13 00:48:54.526573 sshd[3985]: Accepted publickey for core from 139.178.89.65 port 56418 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:48:54.528948 sshd-session[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:48:54.534925 systemd-logind[1461]: New session 8 of user core. Aug 13 00:48:54.543827 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:48:54.855828 sshd[3987]: Connection closed by 139.178.89.65 port 56418 Aug 13 00:48:54.855570 sshd-session[3985]: pam_unix(sshd:session): session closed for user core Aug 13 00:48:54.863162 systemd[1]: sshd@7-172.234.29.142:22-139.178.89.65:56418.service: Deactivated successfully. Aug 13 00:48:54.866311 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:48:54.868720 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:48:54.870048 systemd-logind[1461]: Removed session 8. Aug 13 00:48:59.916871 systemd[1]: Started sshd@8-172.234.29.142:22-139.178.89.65:51192.service - OpenSSH per-connection server daemon (139.178.89.65:51192). Aug 13 00:49:00.239714 sshd[4001]: Accepted publickey for core from 139.178.89.65 port 51192 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:00.241741 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:00.246767 systemd-logind[1461]: New session 9 of user core. Aug 13 00:49:00.252814 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:49:00.545444 sshd[4003]: Connection closed by 139.178.89.65 port 51192 Aug 13 00:49:00.546270 sshd-session[4001]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:00.550521 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. 
Aug 13 00:49:00.551381 systemd[1]: sshd@8-172.234.29.142:22-139.178.89.65:51192.service: Deactivated successfully. Aug 13 00:49:00.553439 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:49:00.554653 systemd-logind[1461]: Removed session 9. Aug 13 00:49:03.225538 kubelet[2596]: I0813 00:49:03.225510 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:03.226008 kubelet[2596]: I0813 00:49:03.225937 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:49:03.227265 kubelet[2596]: I0813 00:49:03.227249 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:49:03.237524 kubelet[2596]: I0813 00:49:03.237503 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:03.237641 kubelet[2596]: I0813 00:49:03.237625 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:49:03.237707 kubelet[2596]: E0813 00:49:03.237657 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:49:03.237707 kubelet[2596]: E0813 00:49:03.237692 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:49:03.237707 kubelet[2596]: E0813 00:49:03.237702 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:49:03.237792 kubelet[2596]: E0813 00:49:03.237711 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/cilium-8pgbq" Aug 13 00:49:03.237792 kubelet[2596]: E0813 00:49:03.237720 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:49:03.237792 kubelet[2596]: E0813 00:49:03.237728 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:49:03.237792 kubelet[2596]: E0813 00:49:03.237736 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:49:03.237792 kubelet[2596]: E0813 00:49:03.237743 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:49:03.237792 kubelet[2596]: I0813 00:49:03.237752 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:49:05.618378 systemd[1]: Started sshd@9-172.234.29.142:22-139.178.89.65:51202.service - OpenSSH per-connection server daemon (139.178.89.65:51202). Aug 13 00:49:05.806532 kubelet[2596]: E0813 00:49:05.806478 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:49:05.949510 sshd[4016]: Accepted publickey for core from 139.178.89.65 port 51202 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:05.950881 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:05.955484 systemd-logind[1461]: New session 10 of user core. Aug 13 00:49:05.963775 systemd[1]: Started session-10.scope - Session 10 of User core. 
Aug 13 00:49:06.252174 sshd[4018]: Connection closed by 139.178.89.65 port 51202 Aug 13 00:49:06.253940 sshd-session[4016]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:06.258465 systemd[1]: sshd@9-172.234.29.142:22-139.178.89.65:51202.service: Deactivated successfully. Aug 13 00:49:06.260665 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:49:06.261502 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:49:06.262613 systemd-logind[1461]: Removed session 10. Aug 13 00:49:06.315016 systemd[1]: Started sshd@10-172.234.29.142:22-139.178.89.65:51204.service - OpenSSH per-connection server daemon (139.178.89.65:51204). Aug 13 00:49:06.637831 sshd[4031]: Accepted publickey for core from 139.178.89.65 port 51204 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:06.638310 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:06.643020 systemd-logind[1461]: New session 11 of user core. Aug 13 00:49:06.651796 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:49:06.982291 sshd[4033]: Connection closed by 139.178.89.65 port 51204 Aug 13 00:49:06.982963 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:06.985758 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:49:06.986465 systemd[1]: sshd@10-172.234.29.142:22-139.178.89.65:51204.service: Deactivated successfully. Aug 13 00:49:06.988565 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:49:06.990587 systemd-logind[1461]: Removed session 11. Aug 13 00:49:07.047024 systemd[1]: Started sshd@11-172.234.29.142:22-139.178.89.65:51216.service - OpenSSH per-connection server daemon (139.178.89.65:51216). 
Aug 13 00:49:07.374001 sshd[4043]: Accepted publickey for core from 139.178.89.65 port 51216 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:07.375600 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:07.380311 systemd-logind[1461]: New session 12 of user core. Aug 13 00:49:07.387816 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:49:07.679189 sshd[4045]: Connection closed by 139.178.89.65 port 51216 Aug 13 00:49:07.679756 sshd-session[4043]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:07.683313 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:49:07.684269 systemd[1]: sshd@11-172.234.29.142:22-139.178.89.65:51216.service: Deactivated successfully. Aug 13 00:49:07.686220 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:49:07.687538 systemd-logind[1461]: Removed session 12. Aug 13 00:49:12.752449 systemd[1]: Started sshd@12-172.234.29.142:22-139.178.89.65:38402.service - OpenSSH per-connection server daemon (139.178.89.65:38402). Aug 13 00:49:13.087208 sshd[4057]: Accepted publickey for core from 139.178.89.65 port 38402 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:13.088604 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:13.094541 systemd-logind[1461]: New session 13 of user core. Aug 13 00:49:13.099833 systemd[1]: Started session-13.scope - Session 13 of User core. 
Aug 13 00:49:13.254052 kubelet[2596]: I0813 00:49:13.253995 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:13.254052 kubelet[2596]: I0813 00:49:13.254036 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:49:13.255860 kubelet[2596]: I0813 00:49:13.255813 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:49:13.258401 kubelet[2596]: I0813 00:49:13.258373 2596 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler="" Aug 13 00:49:13.258802 containerd[1485]: time="2025-08-13T00:49:13.258763683Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:49:13.261349 containerd[1485]: time="2025-08-13T00:49:13.261164191Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:49:13.262040 containerd[1485]: time="2025-08-13T00:49:13.262011475Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" Aug 13 00:49:13.262917 containerd[1485]: time="2025-08-13T00:49:13.262567061Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:49:13.343337 containerd[1485]: time="2025-08-13T00:49:13.342712755Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully" Aug 13 00:49:13.343439 kubelet[2596]: I0813 00:49:13.343043 2596 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" size=320368 runtimeHandler="" Aug 13 00:49:13.344217 containerd[1485]: time="2025-08-13T00:49:13.343838427Z" level=info msg="RemoveImage 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:49:13.344936 containerd[1485]: time="2025-08-13T00:49:13.344882877Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.10\"" Aug 13 00:49:13.345372 containerd[1485]: time="2025-08-13T00:49:13.345303469Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\"" Aug 13 00:49:13.346325 containerd[1485]: time="2025-08-13T00:49:13.345765822Z" level=info msg="ImageDelete event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:49:13.349627 containerd[1485]: time="2025-08-13T00:49:13.349599731Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" returns successfully" Aug 13 00:49:13.366146 kubelet[2596]: I0813 00:49:13.366125 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:13.366411 kubelet[2596]: I0813 00:49:13.366384 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:49:13.366494 kubelet[2596]: E0813 00:49:13.366482 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:49:13.366574 kubelet[2596]: E0813 00:49:13.366563 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:49:13.366641 kubelet[2596]: E0813 00:49:13.366632 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:49:13.366716 kubelet[2596]: E0813 00:49:13.366706 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:49:13.366787 kubelet[2596]: E0813 00:49:13.366779 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:49:13.366832 kubelet[2596]: E0813 00:49:13.366824 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:49:13.366935 kubelet[2596]: E0813 00:49:13.366897 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:49:13.366935 kubelet[2596]: E0813 00:49:13.366912 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:49:13.367030 kubelet[2596]: I0813 00:49:13.367015 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:49:13.422277 sshd[4061]: Connection closed by 139.178.89.65 port 38402 Aug 13 00:49:13.423053 sshd-session[4057]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:13.426457 systemd[1]: sshd@12-172.234.29.142:22-139.178.89.65:38402.service: Deactivated successfully. Aug 13 00:49:13.429085 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:49:13.431437 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:49:13.432647 systemd-logind[1461]: Removed session 13. 
Aug 13 00:49:13.810360 update_engine[1462]: I20250813 00:49:13.810299 1462 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 00:49:13.810360 update_engine[1462]: I20250813 00:49:13.810355 1462 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 00:49:13.810804 update_engine[1462]: I20250813 00:49:13.810549 1462 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 00:49:13.811088 update_engine[1462]: I20250813 00:49:13.811059 1462 omaha_request_params.cc:62] Current group set to stable Aug 13 00:49:13.811207 update_engine[1462]: I20250813 00:49:13.811187 1462 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 00:49:13.811775 update_engine[1462]: I20250813 00:49:13.811259 1462 update_attempter.cc:643] Scheduling an action processor start. Aug 13 00:49:13.811775 update_engine[1462]: I20250813 00:49:13.811283 1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:49:13.811775 update_engine[1462]: I20250813 00:49:13.811313 1462 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 00:49:13.811775 update_engine[1462]: I20250813 00:49:13.811390 1462 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 00:49:13.811775 update_engine[1462]: I20250813 00:49:13.811400 1462 omaha_request_action.cc:272] Request: Aug 13 00:49:13.811775 update_engine[1462]: Aug 13 00:49:13.811775 update_engine[1462]: Aug 13 00:49:13.811775 update_engine[1462]: Aug 13 00:49:13.811775 update_engine[1462]: Aug 13 00:49:13.811775 update_engine[1462]: Aug 13 00:49:13.811775 update_engine[1462]: Aug 13 00:49:13.811775 update_engine[1462]: Aug 13 00:49:13.811775 update_engine[1462]: Aug 13 00:49:13.811775 update_engine[1462]: I20250813 00:49:13.811408 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:49:13.812055 locksmithd[1489]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 00:49:13.813197 update_engine[1462]: I20250813 00:49:13.813169 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:49:13.813577 update_engine[1462]: I20250813 00:49:13.813531 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:49:13.835705 update_engine[1462]: E20250813 00:49:13.835642 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:49:13.835892 update_engine[1462]: I20250813 00:49:13.835748 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 00:49:18.489917 systemd[1]: Started sshd@13-172.234.29.142:22-139.178.89.65:38406.service - OpenSSH per-connection server daemon (139.178.89.65:38406). Aug 13 00:49:18.820133 sshd[4073]: Accepted publickey for core from 139.178.89.65 port 38406 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:18.822147 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:18.827207 systemd-logind[1461]: New session 14 of user core. Aug 13 00:49:18.836802 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:49:19.130923 sshd[4075]: Connection closed by 139.178.89.65 port 38406 Aug 13 00:49:19.132916 sshd-session[4073]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:19.137040 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:49:19.137993 systemd[1]: sshd@13-172.234.29.142:22-139.178.89.65:38406.service: Deactivated successfully. Aug 13 00:49:19.140474 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:49:19.141687 systemd-logind[1461]: Removed session 14. Aug 13 00:49:19.192919 systemd[1]: Started sshd@14-172.234.29.142:22-139.178.89.65:39802.service - OpenSSH per-connection server daemon (139.178.89.65:39802). 
Aug 13 00:49:19.512760 sshd[4088]: Accepted publickey for core from 139.178.89.65 port 39802 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:19.514429 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:19.519609 systemd-logind[1461]: New session 15 of user core. Aug 13 00:49:19.525803 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:49:19.848595 sshd[4092]: Connection closed by 139.178.89.65 port 39802 Aug 13 00:49:19.849239 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:19.856232 systemd[1]: sshd@14-172.234.29.142:22-139.178.89.65:39802.service: Deactivated successfully. Aug 13 00:49:19.858616 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:49:19.859506 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:49:19.861146 systemd-logind[1461]: Removed session 15. Aug 13 00:49:19.914991 systemd[1]: Started sshd@15-172.234.29.142:22-139.178.89.65:39818.service - OpenSSH per-connection server daemon (139.178.89.65:39818). Aug 13 00:49:20.249424 sshd[4102]: Accepted publickey for core from 139.178.89.65 port 39818 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:20.251101 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:20.256450 systemd-logind[1461]: New session 16 of user core. Aug 13 00:49:20.261798 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:49:21.021085 sshd[4104]: Connection closed by 139.178.89.65 port 39818 Aug 13 00:49:21.022105 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:21.026181 systemd[1]: sshd@15-172.234.29.142:22-139.178.89.65:39818.service: Deactivated successfully. Aug 13 00:49:21.028351 systemd[1]: session-16.scope: Deactivated successfully. 
Aug 13 00:49:21.029295 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:49:21.030200 systemd-logind[1461]: Removed session 16. Aug 13 00:49:21.087911 systemd[1]: Started sshd@16-172.234.29.142:22-139.178.89.65:39822.service - OpenSSH per-connection server daemon (139.178.89.65:39822). Aug 13 00:49:21.413529 sshd[4120]: Accepted publickey for core from 139.178.89.65 port 39822 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:21.415479 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:21.420271 systemd-logind[1461]: New session 17 of user core. Aug 13 00:49:21.430809 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:49:21.823442 sshd[4122]: Connection closed by 139.178.89.65 port 39822 Aug 13 00:49:21.824280 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:21.828627 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:49:21.829066 systemd[1]: sshd@16-172.234.29.142:22-139.178.89.65:39822.service: Deactivated successfully. Aug 13 00:49:21.831496 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:49:21.832470 systemd-logind[1461]: Removed session 17. Aug 13 00:49:21.887780 systemd[1]: Started sshd@17-172.234.29.142:22-139.178.89.65:39826.service - OpenSSH per-connection server daemon (139.178.89.65:39826). Aug 13 00:49:22.221600 sshd[4132]: Accepted publickey for core from 139.178.89.65 port 39826 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:22.223488 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:22.228887 systemd-logind[1461]: New session 18 of user core. Aug 13 00:49:22.235822 systemd[1]: Started session-18.scope - Session 18 of User core. 
Aug 13 00:49:22.530978 sshd[4134]: Connection closed by 139.178.89.65 port 39826 Aug 13 00:49:22.531999 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:22.537779 systemd[1]: sshd@17-172.234.29.142:22-139.178.89.65:39826.service: Deactivated successfully. Aug 13 00:49:22.540251 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:49:22.541275 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:49:22.542184 systemd-logind[1461]: Removed session 18. Aug 13 00:49:23.382547 kubelet[2596]: I0813 00:49:23.382506 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:23.382547 kubelet[2596]: I0813 00:49:23.382542 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:49:23.384704 kubelet[2596]: I0813 00:49:23.384362 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:49:23.399136 kubelet[2596]: I0813 00:49:23.399101 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:23.399250 kubelet[2596]: I0813 00:49:23.399226 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:49:23.399277 kubelet[2596]: E0813 00:49:23.399265 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:49:23.399299 kubelet[2596]: E0813 00:49:23.399281 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:49:23.399299 kubelet[2596]: 
E0813 00:49:23.399293 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:49:23.399370 kubelet[2596]: E0813 00:49:23.399308 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:49:23.399370 kubelet[2596]: E0813 00:49:23.399320 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:49:23.399370 kubelet[2596]: E0813 00:49:23.399328 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:49:23.399370 kubelet[2596]: E0813 00:49:23.399337 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:49:23.399370 kubelet[2596]: E0813 00:49:23.399345 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:49:23.399370 kubelet[2596]: I0813 00:49:23.399354 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:49:23.811233 update_engine[1462]: I20250813 00:49:23.811126 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:49:23.811737 update_engine[1462]: I20250813 00:49:23.811498 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:49:23.811893 update_engine[1462]: I20250813 00:49:23.811854 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:49:23.812836 update_engine[1462]: E20250813 00:49:23.812737 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:49:23.812836 update_engine[1462]: I20250813 00:49:23.812805 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 00:49:27.596940 systemd[1]: Started sshd@18-172.234.29.142:22-139.178.89.65:39838.service - OpenSSH per-connection server daemon (139.178.89.65:39838). Aug 13 00:49:27.924230 sshd[4148]: Accepted publickey for core from 139.178.89.65 port 39838 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:27.925914 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:27.930130 systemd-logind[1461]: New session 19 of user core. Aug 13 00:49:27.937829 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:49:28.246285 sshd[4150]: Connection closed by 139.178.89.65 port 39838 Aug 13 00:49:28.247652 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:28.255697 systemd[1]: sshd@18-172.234.29.142:22-139.178.89.65:39838.service: Deactivated successfully. Aug 13 00:49:28.261214 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:49:28.263129 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:49:28.264574 systemd-logind[1461]: Removed session 19. Aug 13 00:49:33.320145 systemd[1]: Started sshd@19-172.234.29.142:22-139.178.89.65:45876.service - OpenSSH per-connection server daemon (139.178.89.65:45876). 
Aug 13 00:49:33.421761 kubelet[2596]: I0813 00:49:33.421719 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:33.421761 kubelet[2596]: I0813 00:49:33.421763 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:49:33.424443 kubelet[2596]: I0813 00:49:33.424380 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:49:33.436270 kubelet[2596]: I0813 00:49:33.436022 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:33.436270 kubelet[2596]: I0813 00:49:33.436151 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:49:33.436270 kubelet[2596]: E0813 00:49:33.436180 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:49:33.436270 kubelet[2596]: E0813 00:49:33.436191 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:49:33.436270 kubelet[2596]: E0813 00:49:33.436200 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:49:33.436270 kubelet[2596]: E0813 00:49:33.436209 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:49:33.436270 kubelet[2596]: E0813 00:49:33.436217 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:49:33.436270 
kubelet[2596]: E0813 00:49:33.436226 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:49:33.436270 kubelet[2596]: E0813 00:49:33.436234 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:49:33.436270 kubelet[2596]: E0813 00:49:33.436246 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:49:33.436270 kubelet[2596]: I0813 00:49:33.436256 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:49:33.650785 sshd[4161]: Accepted publickey for core from 139.178.89.65 port 45876 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:33.653142 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:33.658276 systemd-logind[1461]: New session 20 of user core. Aug 13 00:49:33.662779 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:49:33.810660 update_engine[1462]: I20250813 00:49:33.809785 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:49:33.810660 update_engine[1462]: I20250813 00:49:33.810122 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:49:33.810660 update_engine[1462]: I20250813 00:49:33.810401 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:49:33.811256 update_engine[1462]: E20250813 00:49:33.811126 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:49:33.811256 update_engine[1462]: I20250813 00:49:33.811174 1462 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 00:49:33.975248 sshd[4163]: Connection closed by 139.178.89.65 port 45876 Aug 13 00:49:33.976448 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:33.982930 systemd[1]: sshd@19-172.234.29.142:22-139.178.89.65:45876.service: Deactivated successfully. Aug 13 00:49:33.985321 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:49:33.986459 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:49:33.987442 systemd-logind[1461]: Removed session 20. Aug 13 00:49:39.044074 systemd[1]: Started sshd@20-172.234.29.142:22-139.178.89.65:46044.service - OpenSSH per-connection server daemon (139.178.89.65:46044). Aug 13 00:49:39.360868 sshd[4175]: Accepted publickey for core from 139.178.89.65 port 46044 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:49:39.363042 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:49:39.369234 systemd-logind[1461]: New session 21 of user core. Aug 13 00:49:39.373909 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:49:39.662839 sshd[4177]: Connection closed by 139.178.89.65 port 46044 Aug 13 00:49:39.663711 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Aug 13 00:49:39.668930 systemd[1]: sshd@20-172.234.29.142:22-139.178.89.65:46044.service: Deactivated successfully. Aug 13 00:49:39.672105 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:49:39.673164 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:49:39.674589 systemd-logind[1461]: Removed session 21. 
Aug 13 00:49:43.458170 kubelet[2596]: I0813 00:49:43.455296 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:43.458170 kubelet[2596]: I0813 00:49:43.455339 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:49:43.458170 kubelet[2596]: I0813 00:49:43.457646 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:49:43.471050 kubelet[2596]: I0813 00:49:43.471032 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:49:43.471149 kubelet[2596]: I0813 00:49:43.471132 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:49:43.471176 kubelet[2596]: E0813 00:49:43.471167 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:49:43.471206 kubelet[2596]: E0813 00:49:43.471178 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:49:43.471206 kubelet[2596]: E0813 00:49:43.471187 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:49:43.471206 kubelet[2596]: E0813 00:49:43.471197 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:49:43.471206 kubelet[2596]: E0813 00:49:43.471205 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:49:43.471304 
kubelet[2596]: E0813 00:49:43.471213 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:49:43.471304 kubelet[2596]: E0813 00:49:43.471221 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:49:43.471304 kubelet[2596]: E0813 00:49:43.471230 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:49:43.471304 kubelet[2596]: I0813 00:49:43.471240 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:49:43.811761 update_engine[1462]: I20250813 00:49:43.811508 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:49:43.812305 update_engine[1462]: I20250813 00:49:43.811893 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:49:43.812305 update_engine[1462]: I20250813 00:49:43.812214 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:49:43.813324 update_engine[1462]: E20250813 00:49:43.813136 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:49:43.813324 update_engine[1462]: I20250813 00:49:43.813192 1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 00:49:43.813324 update_engine[1462]: I20250813 00:49:43.813202 1462 omaha_request_action.cc:617] Omaha request response: Aug 13 00:49:43.813324 update_engine[1462]: E20250813 00:49:43.813294 1462 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 13 00:49:43.813324 update_engine[1462]: I20250813 00:49:43.813325 1462 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Aug 13 00:49:43.813324 update_engine[1462]: I20250813 00:49:43.813334 1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:49:43.813324 update_engine[1462]: I20250813 00:49:43.813340 1462 update_attempter.cc:306] Processing Done. Aug 13 00:49:43.813739 update_engine[1462]: E20250813 00:49:43.813364 1462 update_attempter.cc:619] Update failed. Aug 13 00:49:43.813739 update_engine[1462]: I20250813 00:49:43.813373 1462 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 00:49:43.813739 update_engine[1462]: I20250813 00:49:43.813380 1462 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 00:49:43.813739 update_engine[1462]: I20250813 00:49:43.813387 1462 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 13 00:49:43.813739 update_engine[1462]: I20250813 00:49:43.813479 1462 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:49:43.813739 update_engine[1462]: I20250813 00:49:43.813501 1462 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 00:49:43.813739 update_engine[1462]: I20250813 00:49:43.813508 1462 omaha_request_action.cc:272] Request: Aug 13 00:49:43.813739 update_engine[1462]: Aug 13 00:49:43.813739 update_engine[1462]: Aug 13 00:49:43.813739 update_engine[1462]: Aug 13 00:49:43.813739 update_engine[1462]: Aug 13 00:49:43.813739 update_engine[1462]: Aug 13 00:49:43.813739 update_engine[1462]: Aug 13 00:49:43.813739 update_engine[1462]: I20250813 00:49:43.813516 1462 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:49:43.813739 update_engine[1462]: I20250813 00:49:43.813664 1462 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:49:43.813988 update_engine[1462]: I20250813 00:49:43.813853 1462 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:49:43.814270 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Aug 13 00:49:43.814696 update_engine[1462]: E20250813 00:49:43.814625 1462 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:49:43.814801 update_engine[1462]: I20250813 00:49:43.814766 1462 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 00:49:43.814801 update_engine[1462]: I20250813 00:49:43.814786 1462 omaha_request_action.cc:617] Omaha request response:
Aug 13 00:49:43.814843 update_engine[1462]: I20250813 00:49:43.814800 1462 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 00:49:43.814843 update_engine[1462]: I20250813 00:49:43.814807 1462 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 00:49:43.814843 update_engine[1462]: I20250813 00:49:43.814814 1462 update_attempter.cc:306] Processing Done.
Aug 13 00:49:43.814843 update_engine[1462]: I20250813 00:49:43.814822 1462 update_attempter.cc:310] Error event sent.
Aug 13 00:49:43.814843 update_engine[1462]: I20250813 00:49:43.814839 1462 update_check_scheduler.cc:74] Next update check in 40m11s
Aug 13 00:49:43.815145 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Aug 13 00:49:44.734073 systemd[1]: Started sshd@21-172.234.29.142:22-139.178.89.65:46056.service - OpenSSH per-connection server daemon (139.178.89.65:46056).
Aug 13 00:49:45.061466 sshd[4189]: Accepted publickey for core from 139.178.89.65 port 46056 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:49:45.063054 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:45.067755 systemd-logind[1461]: New session 22 of user core.
Aug 13 00:49:45.070809 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:49:45.361666 sshd[4193]: Connection closed by 139.178.89.65 port 46056
Aug 13 00:49:45.363155 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:45.366746 systemd[1]: sshd@21-172.234.29.142:22-139.178.89.65:46056.service: Deactivated successfully.
Aug 13 00:49:45.369170 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:49:45.369930 systemd-logind[1461]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:49:45.370772 systemd-logind[1461]: Removed session 22.
Aug 13 00:49:48.807713 kubelet[2596]: E0813 00:49:48.807348 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:49:50.427880 systemd[1]: Started sshd@22-172.234.29.142:22-139.178.89.65:51332.service - OpenSSH per-connection server daemon (139.178.89.65:51332).
Aug 13 00:49:50.753321 sshd[4208]: Accepted publickey for core from 139.178.89.65 port 51332 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:49:50.754903 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:50.760250 systemd-logind[1461]: New session 23 of user core.
Aug 13 00:49:50.764786 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:49:51.044542 sshd[4210]: Connection closed by 139.178.89.65 port 51332
Aug 13 00:49:51.044950 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:51.049281 systemd[1]: sshd@22-172.234.29.142:22-139.178.89.65:51332.service: Deactivated successfully.
Aug 13 00:49:51.051393 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:49:51.052185 systemd-logind[1461]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:49:51.053251 systemd-logind[1461]: Removed session 23.
Aug 13 00:49:53.491025 kubelet[2596]: I0813 00:49:53.490982 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:53.491025 kubelet[2596]: I0813 00:49:53.491027 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:49:53.493884 kubelet[2596]: I0813 00:49:53.493335 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:49:53.507160 kubelet[2596]: I0813 00:49:53.507122 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:53.507306 kubelet[2596]: I0813 00:49:53.507256 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:49:53.507306 kubelet[2596]: E0813 00:49:53.507288 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:49:53.507306 kubelet[2596]: E0813 00:49:53.507299 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:49:53.507306 kubelet[2596]: E0813 00:49:53.507308 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:49:53.507426 kubelet[2596]: E0813 00:49:53.507319 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:49:53.507426 kubelet[2596]: E0813 00:49:53.507327 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:49:53.507426 kubelet[2596]: E0813 00:49:53.507335 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:49:53.507426 kubelet[2596]: E0813 00:49:53.507343 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:49:53.507426 kubelet[2596]: E0813 00:49:53.507351 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:49:53.507426 kubelet[2596]: I0813 00:49:53.507360 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:49:56.111901 systemd[1]: Started sshd@23-172.234.29.142:22-139.178.89.65:51342.service - OpenSSH per-connection server daemon (139.178.89.65:51342).
Aug 13 00:49:56.439195 sshd[4221]: Accepted publickey for core from 139.178.89.65 port 51342 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:49:56.441280 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:56.447161 systemd-logind[1461]: New session 24 of user core.
Aug 13 00:49:56.450818 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:49:56.743693 sshd[4223]: Connection closed by 139.178.89.65 port 51342
Aug 13 00:49:56.744684 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:56.749869 systemd-logind[1461]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:49:56.750831 systemd[1]: sshd@23-172.234.29.142:22-139.178.89.65:51342.service: Deactivated successfully.
Aug 13 00:49:56.753107 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:49:56.754720 systemd-logind[1461]: Removed session 24.
Aug 13 00:49:56.807727 kubelet[2596]: E0813 00:49:56.806708 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:49:58.807614 kubelet[2596]: E0813 00:49:58.806614 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:50:01.807848 kubelet[2596]: E0813 00:50:01.807804 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:50:01.813214 systemd[1]: Started sshd@24-172.234.29.142:22-139.178.89.65:34092.service - OpenSSH per-connection server daemon (139.178.89.65:34092).
Aug 13 00:50:02.130322 sshd[4234]: Accepted publickey for core from 139.178.89.65 port 34092 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:02.132449 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:02.137298 systemd-logind[1461]: New session 25 of user core.
Aug 13 00:50:02.140851 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:50:02.432609 sshd[4236]: Connection closed by 139.178.89.65 port 34092
Aug 13 00:50:02.433221 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:02.437266 systemd[1]: sshd@24-172.234.29.142:22-139.178.89.65:34092.service: Deactivated successfully.
Aug 13 00:50:02.439049 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:50:02.440065 systemd-logind[1461]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:50:02.441310 systemd-logind[1461]: Removed session 25.
Aug 13 00:50:03.523440 kubelet[2596]: I0813 00:50:03.523407 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:03.523440 kubelet[2596]: I0813 00:50:03.523443 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:50:03.525733 kubelet[2596]: I0813 00:50:03.525612 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:50:03.535281 kubelet[2596]: I0813 00:50:03.535250 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:03.535358 kubelet[2596]: I0813 00:50:03.535342 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:50:03.535387 kubelet[2596]: E0813 00:50:03.535370 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:50:03.535387 kubelet[2596]: E0813 00:50:03.535381 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:50:03.535449 kubelet[2596]: E0813 00:50:03.535389 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:50:03.535449 kubelet[2596]: E0813 00:50:03.535398 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:50:03.535449 kubelet[2596]: E0813 00:50:03.535405 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:50:03.535449 kubelet[2596]: E0813 00:50:03.535413 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:50:03.535449 kubelet[2596]: E0813 00:50:03.535421 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:50:03.535449 kubelet[2596]: E0813 00:50:03.535428 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:50:03.535449 kubelet[2596]: I0813 00:50:03.535437 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:50:03.807224 kubelet[2596]: E0813 00:50:03.807103 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:50:07.502871 systemd[1]: Started sshd@25-172.234.29.142:22-139.178.89.65:34104.service - OpenSSH per-connection server daemon (139.178.89.65:34104).
Aug 13 00:50:07.806719 kubelet[2596]: E0813 00:50:07.806427 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:50:07.822522 sshd[4248]: Accepted publickey for core from 139.178.89.65 port 34104 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:07.824358 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:07.829510 systemd-logind[1461]: New session 26 of user core.
Aug 13 00:50:07.841881 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 00:50:08.116412 sshd[4250]: Connection closed by 139.178.89.65 port 34104
Aug 13 00:50:08.117572 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:08.120694 systemd[1]: sshd@25-172.234.29.142:22-139.178.89.65:34104.service: Deactivated successfully.
Aug 13 00:50:08.122716 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:50:08.124383 systemd-logind[1461]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:50:08.125466 systemd-logind[1461]: Removed session 26.
Aug 13 00:50:09.807160 kubelet[2596]: E0813 00:50:09.807119 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:50:13.185873 systemd[1]: Started sshd@26-172.234.29.142:22-139.178.89.65:38370.service - OpenSSH per-connection server daemon (139.178.89.65:38370).
Aug 13 00:50:13.516794 sshd[4264]: Accepted publickey for core from 139.178.89.65 port 38370 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:13.518386 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:13.523745 systemd-logind[1461]: New session 27 of user core.
Aug 13 00:50:13.530821 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:50:13.556435 kubelet[2596]: I0813 00:50:13.556398 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:13.556435 kubelet[2596]: I0813 00:50:13.556440 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:50:13.558587 kubelet[2596]: I0813 00:50:13.558560 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:50:13.571462 kubelet[2596]: I0813 00:50:13.571437 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:13.571582 kubelet[2596]: I0813 00:50:13.571557 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:50:13.571649 kubelet[2596]: E0813 00:50:13.571597 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:50:13.571649 kubelet[2596]: E0813 00:50:13.571614 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:50:13.571649 kubelet[2596]: E0813 00:50:13.571627 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:50:13.571649 kubelet[2596]: E0813 00:50:13.571642 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:50:13.571788 kubelet[2596]: E0813 00:50:13.571655 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:50:13.571788 kubelet[2596]: E0813 00:50:13.571687 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:50:13.571788 kubelet[2596]: E0813 00:50:13.571700 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:50:13.571788 kubelet[2596]: E0813 00:50:13.571713 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:50:13.571788 kubelet[2596]: I0813 00:50:13.571728 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:50:13.819979 sshd[4266]: Connection closed by 139.178.89.65 port 38370
Aug 13 00:50:13.821175 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:13.824288 systemd-logind[1461]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:50:13.824890 systemd[1]: sshd@26-172.234.29.142:22-139.178.89.65:38370.service: Deactivated successfully.
Aug 13 00:50:13.826776 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:50:13.827525 systemd-logind[1461]: Removed session 27.
Aug 13 00:50:18.893963 systemd[1]: Started sshd@27-172.234.29.142:22-139.178.89.65:38386.service - OpenSSH per-connection server daemon (139.178.89.65:38386).
Aug 13 00:50:19.221819 sshd[4278]: Accepted publickey for core from 139.178.89.65 port 38386 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:19.223764 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:19.229728 systemd-logind[1461]: New session 28 of user core.
Aug 13 00:50:19.235889 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:50:19.529066 sshd[4280]: Connection closed by 139.178.89.65 port 38386
Aug 13 00:50:19.529952 sshd-session[4278]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:19.535143 systemd[1]: sshd@27-172.234.29.142:22-139.178.89.65:38386.service: Deactivated successfully.
Aug 13 00:50:19.537843 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:50:19.538857 systemd-logind[1461]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:50:19.540040 systemd-logind[1461]: Removed session 28.
Aug 13 00:50:19.806599 kubelet[2596]: E0813 00:50:19.806428 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:50:23.590434 kubelet[2596]: I0813 00:50:23.590392 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:23.590434 kubelet[2596]: I0813 00:50:23.590435 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:50:23.595360 kubelet[2596]: I0813 00:50:23.595337 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:50:23.606887 kubelet[2596]: I0813 00:50:23.606862 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:23.607009 kubelet[2596]: I0813 00:50:23.606989 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:50:23.607071 kubelet[2596]: E0813 00:50:23.607022 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:50:23.607071 kubelet[2596]: E0813 00:50:23.607034 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:50:23.607071 kubelet[2596]: E0813 00:50:23.607043 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:50:23.607071 kubelet[2596]: E0813 00:50:23.607051 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:50:23.607071 kubelet[2596]: E0813 00:50:23.607060 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:50:23.607071 kubelet[2596]: E0813 00:50:23.607069 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:50:23.607071 kubelet[2596]: E0813 00:50:23.607077 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:50:23.607422 kubelet[2596]: E0813 00:50:23.607085 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:50:23.607422 kubelet[2596]: I0813 00:50:23.607094 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:50:24.603794 systemd[1]: Started sshd@28-172.234.29.142:22-139.178.89.65:48078.service - OpenSSH per-connection server daemon (139.178.89.65:48078).
Aug 13 00:50:24.936179 sshd[4293]: Accepted publickey for core from 139.178.89.65 port 48078 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:24.938172 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:24.943528 systemd-logind[1461]: New session 29 of user core.
Aug 13 00:50:24.948820 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 00:50:25.240045 sshd[4295]: Connection closed by 139.178.89.65 port 48078
Aug 13 00:50:25.241028 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:25.245584 systemd[1]: sshd@28-172.234.29.142:22-139.178.89.65:48078.service: Deactivated successfully.
Aug 13 00:50:25.248422 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:50:25.249201 systemd-logind[1461]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:50:25.250325 systemd-logind[1461]: Removed session 29.
Aug 13 00:50:30.304093 systemd[1]: Started sshd@29-172.234.29.142:22-139.178.89.65:41944.service - OpenSSH per-connection server daemon (139.178.89.65:41944).
Aug 13 00:50:30.634332 sshd[4307]: Accepted publickey for core from 139.178.89.65 port 41944 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:30.636011 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:30.641620 systemd-logind[1461]: New session 30 of user core.
Aug 13 00:50:30.647820 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 00:50:30.937584 sshd[4309]: Connection closed by 139.178.89.65 port 41944
Aug 13 00:50:30.939385 sshd-session[4307]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:30.944414 systemd[1]: sshd@29-172.234.29.142:22-139.178.89.65:41944.service: Deactivated successfully.
Aug 13 00:50:30.946949 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 00:50:30.948226 systemd-logind[1461]: Session 30 logged out. Waiting for processes to exit.
Aug 13 00:50:30.949342 systemd-logind[1461]: Removed session 30.
Aug 13 00:50:33.626285 kubelet[2596]: I0813 00:50:33.626241 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:33.626285 kubelet[2596]: I0813 00:50:33.626287 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:50:33.628748 kubelet[2596]: I0813 00:50:33.628712 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:50:33.642039 kubelet[2596]: I0813 00:50:33.641753 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:33.642039 kubelet[2596]: I0813 00:50:33.641885 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:50:33.642039 kubelet[2596]: E0813 00:50:33.641927 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:50:33.642039 kubelet[2596]: E0813 00:50:33.641941 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:50:33.642039 kubelet[2596]: E0813 00:50:33.641951 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:50:33.642039 kubelet[2596]: E0813 00:50:33.641966 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:50:33.642039 kubelet[2596]: E0813 00:50:33.641975 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:50:33.642039 kubelet[2596]: E0813 00:50:33.641987 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:50:33.642039 kubelet[2596]: E0813 00:50:33.641996 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:50:33.642039 kubelet[2596]: E0813 00:50:33.642007 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:50:33.642039 kubelet[2596]: I0813 00:50:33.642020 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:50:36.007908 systemd[1]: Started sshd@30-172.234.29.142:22-139.178.89.65:41950.service - OpenSSH per-connection server daemon (139.178.89.65:41950).
Aug 13 00:50:36.334013 sshd[4322]: Accepted publickey for core from 139.178.89.65 port 41950 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:36.335810 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:36.341654 systemd-logind[1461]: New session 31 of user core.
Aug 13 00:50:36.348894 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 13 00:50:36.636630 sshd[4324]: Connection closed by 139.178.89.65 port 41950
Aug 13 00:50:36.637792 sshd-session[4322]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:36.642212 systemd[1]: sshd@30-172.234.29.142:22-139.178.89.65:41950.service: Deactivated successfully.
Aug 13 00:50:36.645172 systemd[1]: session-31.scope: Deactivated successfully.
Aug 13 00:50:36.646138 systemd-logind[1461]: Session 31 logged out. Waiting for processes to exit.
Aug 13 00:50:36.647312 systemd-logind[1461]: Removed session 31.
Aug 13 00:50:41.708008 systemd[1]: Started sshd@31-172.234.29.142:22-139.178.89.65:48914.service - OpenSSH per-connection server daemon (139.178.89.65:48914).
Aug 13 00:50:42.056978 sshd[4336]: Accepted publickey for core from 139.178.89.65 port 48914 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:42.058165 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:42.069993 systemd-logind[1461]: New session 32 of user core.
Aug 13 00:50:42.077816 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 13 00:50:42.364719 sshd[4338]: Connection closed by 139.178.89.65 port 48914
Aug 13 00:50:42.365602 sshd-session[4336]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:42.370097 systemd-logind[1461]: Session 32 logged out. Waiting for processes to exit.
Aug 13 00:50:42.371214 systemd[1]: sshd@31-172.234.29.142:22-139.178.89.65:48914.service: Deactivated successfully.
Aug 13 00:50:42.373706 systemd[1]: session-32.scope: Deactivated successfully.
Aug 13 00:50:42.374814 systemd-logind[1461]: Removed session 32.
Aug 13 00:50:43.662872 kubelet[2596]: I0813 00:50:43.662557 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:43.662872 kubelet[2596]: I0813 00:50:43.662876 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:50:43.666186 kubelet[2596]: I0813 00:50:43.665821 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:50:43.684684 kubelet[2596]: I0813 00:50:43.684637 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:43.685103 kubelet[2596]: I0813 00:50:43.685066 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:50:43.685157 kubelet[2596]: E0813 00:50:43.685119 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:50:43.685157 kubelet[2596]: E0813 00:50:43.685134 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:50:43.685157 kubelet[2596]: E0813 00:50:43.685143 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:50:43.685157 kubelet[2596]: E0813 00:50:43.685153 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:50:43.685247 kubelet[2596]: E0813 00:50:43.685162 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:50:43.685247 kubelet[2596]: E0813 00:50:43.685172 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:50:43.685247 kubelet[2596]: E0813 00:50:43.685180 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:50:43.685247 kubelet[2596]: E0813 00:50:43.685188 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:50:43.685247 kubelet[2596]: I0813 00:50:43.685197 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:50:47.435958 systemd[1]: Started sshd@32-172.234.29.142:22-139.178.89.65:48928.service - OpenSSH per-connection server daemon (139.178.89.65:48928).
Aug 13 00:50:47.776543 sshd[4352]: Accepted publickey for core from 139.178.89.65 port 48928 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:47.778715 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:47.784276 systemd-logind[1461]: New session 33 of user core.
Aug 13 00:50:47.789836 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 00:50:48.085236 sshd[4354]: Connection closed by 139.178.89.65 port 48928
Aug 13 00:50:48.086163 sshd-session[4352]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:48.090369 systemd[1]: sshd@32-172.234.29.142:22-139.178.89.65:48928.service: Deactivated successfully.
Aug 13 00:50:48.093202 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 00:50:48.093935 systemd-logind[1461]: Session 33 logged out. Waiting for processes to exit.
Aug 13 00:50:48.094960 systemd-logind[1461]: Removed session 33.
Aug 13 00:50:53.149886 systemd[1]: Started sshd@33-172.234.29.142:22-139.178.89.65:56092.service - OpenSSH per-connection server daemon (139.178.89.65:56092).
Aug 13 00:50:53.476652 sshd[4368]: Accepted publickey for core from 139.178.89.65 port 56092 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:53.478317 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:53.484870 systemd-logind[1461]: New session 34 of user core.
Aug 13 00:50:53.488825 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 00:50:53.715132 kubelet[2596]: I0813 00:50:53.715091 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:53.716764 kubelet[2596]: I0813 00:50:53.715574 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:50:53.718589 kubelet[2596]: I0813 00:50:53.718576 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:50:53.733647 kubelet[2596]: I0813 00:50:53.733487 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:50:53.734209 kubelet[2596]: I0813 00:50:53.734080 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:50:53.734209 kubelet[2596]: E0813 00:50:53.734139 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:50:53.734209 kubelet[2596]: E0813 00:50:53.734162 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:50:53.734209 kubelet[2596]: E0813 00:50:53.734172 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:50:53.734209 kubelet[2596]: E0813 00:50:53.734181 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:50:53.734209 kubelet[2596]: E0813 00:50:53.734191 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:50:53.734565 kubelet[2596]: E0813 00:50:53.734497 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:50:53.734565 kubelet[2596]: E0813 00:50:53.734515 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:50:53.734565 kubelet[2596]: E0813 00:50:53.734524 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:50:53.734565 kubelet[2596]: I0813 00:50:53.734535 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:50:53.803884 sshd[4370]: Connection closed by 139.178.89.65 port 56092
Aug 13 00:50:53.805129 sshd-session[4368]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:53.809883 systemd-logind[1461]: Session 34 logged out. Waiting for processes to exit.
Aug 13 00:50:53.810876 systemd[1]: sshd@33-172.234.29.142:22-139.178.89.65:56092.service: Deactivated successfully.
Aug 13 00:50:53.813121 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 00:50:53.814005 systemd-logind[1461]: Removed session 34.
Aug 13 00:50:58.869891 systemd[1]: Started sshd@34-172.234.29.142:22-139.178.89.65:56098.service - OpenSSH per-connection server daemon (139.178.89.65:56098).
Aug 13 00:50:59.207412 sshd[4382]: Accepted publickey for core from 139.178.89.65 port 56098 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:50:59.209624 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:59.214591 systemd-logind[1461]: New session 35 of user core.
Aug 13 00:50:59.216838 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 00:50:59.517383 sshd[4384]: Connection closed by 139.178.89.65 port 56098
Aug 13 00:50:59.518358 sshd-session[4382]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:59.522456 systemd[1]: sshd@34-172.234.29.142:22-139.178.89.65:56098.service: Deactivated successfully.
Aug 13 00:50:59.525441 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 00:50:59.527082 systemd-logind[1461]: Session 35 logged out. Waiting for processes to exit.
Aug 13 00:50:59.528092 systemd-logind[1461]: Removed session 35.
Aug 13 00:51:03.753619 kubelet[2596]: I0813 00:51:03.753572 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:03.753619 kubelet[2596]: I0813 00:51:03.753620 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:51:03.756319 kubelet[2596]: I0813 00:51:03.756246 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:51:03.772093 kubelet[2596]: I0813 00:51:03.772062 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:03.772189 kubelet[2596]: I0813 00:51:03.772176 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:51:03.772219 kubelet[2596]: E0813 00:51:03.772205 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:51:03.772219 kubelet[2596]: E0813 00:51:03.772216 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:51:03.772263 kubelet[2596]: E0813 00:51:03.772224 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:51:03.772263 kubelet[2596]: E0813 00:51:03.772233 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:51:03.772263 kubelet[2596]: E0813 00:51:03.772241 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:51:03.772263 kubelet[2596]: E0813 00:51:03.772249 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:51:03.772263 kubelet[2596]: E0813 00:51:03.772257 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:51:03.772263 kubelet[2596]: E0813 00:51:03.772265 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:51:03.772388 kubelet[2596]: I0813 00:51:03.772275 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:51:04.585877 systemd[1]: Started sshd@35-172.234.29.142:22-139.178.89.65:41190.service - OpenSSH per-connection server daemon (139.178.89.65:41190).
Aug 13 00:51:04.916444 sshd[4396]: Accepted publickey for core from 139.178.89.65 port 41190 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:04.918066 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:04.924044 systemd-logind[1461]: New session 36 of user core.
Aug 13 00:51:04.935933 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 00:51:05.223926 sshd[4398]: Connection closed by 139.178.89.65 port 41190
Aug 13 00:51:05.224561 sshd-session[4396]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:05.229803 systemd[1]: sshd@35-172.234.29.142:22-139.178.89.65:41190.service: Deactivated successfully.
Aug 13 00:51:05.232098 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 00:51:05.233146 systemd-logind[1461]: Session 36 logged out. Waiting for processes to exit.
Aug 13 00:51:05.234504 systemd-logind[1461]: Removed session 36.
Aug 13 00:51:06.807250 kubelet[2596]: E0813 00:51:06.806406 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:51:09.806748 kubelet[2596]: E0813 00:51:09.806697 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:51:10.289931 systemd[1]: Started sshd@36-172.234.29.142:22-139.178.89.65:50380.service - OpenSSH per-connection server daemon (139.178.89.65:50380).
Aug 13 00:51:10.610709 sshd[4410]: Accepted publickey for core from 139.178.89.65 port 50380 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:10.612083 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:10.616724 systemd-logind[1461]: New session 37 of user core.
Aug 13 00:51:10.621795 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 00:51:10.908331 sshd[4412]: Connection closed by 139.178.89.65 port 50380
Aug 13 00:51:10.909290 sshd-session[4410]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:10.916452 systemd[1]: sshd@36-172.234.29.142:22-139.178.89.65:50380.service: Deactivated successfully.
Aug 13 00:51:10.918407 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 00:51:10.919348 systemd-logind[1461]: Session 37 logged out. Waiting for processes to exit.
Aug 13 00:51:10.920281 systemd-logind[1461]: Removed session 37.
Aug 13 00:51:13.795745 kubelet[2596]: I0813 00:51:13.793614 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:13.795745 kubelet[2596]: I0813 00:51:13.793731 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:51:13.800132 kubelet[2596]: I0813 00:51:13.800098 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:51:13.815530 kubelet[2596]: I0813 00:51:13.815485 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:13.815732 kubelet[2596]: I0813 00:51:13.815601 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:51:13.815732 kubelet[2596]: E0813 00:51:13.815634 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:51:13.815732 kubelet[2596]: E0813 00:51:13.815649 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:51:13.815732 kubelet[2596]: E0813 00:51:13.815658 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:51:13.815732 kubelet[2596]: E0813 00:51:13.815688 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:51:13.815732 kubelet[2596]: E0813 00:51:13.815700 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:51:13.815732 kubelet[2596]: E0813 00:51:13.815709 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:51:13.815732 kubelet[2596]: E0813 00:51:13.815716 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:51:13.815732 kubelet[2596]: E0813 00:51:13.815725 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:51:13.815732 kubelet[2596]: I0813 00:51:13.815733 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:51:15.807451 kubelet[2596]: E0813 00:51:15.807412 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:51:15.979103 systemd[1]: Started sshd@37-172.234.29.142:22-139.178.89.65:50390.service - OpenSSH per-connection server daemon (139.178.89.65:50390).
Aug 13 00:51:16.300707 sshd[4426]: Accepted publickey for core from 139.178.89.65 port 50390 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:16.302405 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:16.307591 systemd-logind[1461]: New session 38 of user core.
Aug 13 00:51:16.312812 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 00:51:16.613257 sshd[4428]: Connection closed by 139.178.89.65 port 50390
Aug 13 00:51:16.614433 sshd-session[4426]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:16.618665 systemd[1]: sshd@37-172.234.29.142:22-139.178.89.65:50390.service: Deactivated successfully.
Aug 13 00:51:16.621163 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 00:51:16.623884 systemd-logind[1461]: Session 38 logged out. Waiting for processes to exit.
Aug 13 00:51:16.625220 systemd-logind[1461]: Removed session 38.
Aug 13 00:51:21.682934 systemd[1]: Started sshd@38-172.234.29.142:22-139.178.89.65:56626.service - OpenSSH per-connection server daemon (139.178.89.65:56626).
Aug 13 00:51:22.013714 sshd[4442]: Accepted publickey for core from 139.178.89.65 port 56626 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:22.015360 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:22.021191 systemd-logind[1461]: New session 39 of user core.
Aug 13 00:51:22.028819 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 00:51:22.320222 sshd[4444]: Connection closed by 139.178.89.65 port 56626
Aug 13 00:51:22.321067 sshd-session[4442]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:22.326144 systemd[1]: sshd@38-172.234.29.142:22-139.178.89.65:56626.service: Deactivated successfully.
Aug 13 00:51:22.328998 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 00:51:22.330011 systemd-logind[1461]: Session 39 logged out. Waiting for processes to exit.
Aug 13 00:51:22.331534 systemd-logind[1461]: Removed session 39.
Aug 13 00:51:23.806997 kubelet[2596]: E0813 00:51:23.806966 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:51:23.831004 kubelet[2596]: I0813 00:51:23.830961 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:23.831004 kubelet[2596]: I0813 00:51:23.830996 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:51:23.832716 kubelet[2596]: I0813 00:51:23.832701 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:51:23.843022 kubelet[2596]: I0813 00:51:23.842999 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:23.843149 kubelet[2596]: I0813 00:51:23.843128 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:51:23.843192 kubelet[2596]: E0813 00:51:23.843162 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:51:23.843192 kubelet[2596]: E0813 00:51:23.843174 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:51:23.843192 kubelet[2596]: E0813 00:51:23.843182 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:51:23.843192 kubelet[2596]: E0813 00:51:23.843191 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:51:23.843282 kubelet[2596]: E0813 00:51:23.843199 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:51:23.843282 kubelet[2596]: E0813 00:51:23.843206 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:51:23.843282 kubelet[2596]: E0813 00:51:23.843214 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:51:23.843282 kubelet[2596]: E0813 00:51:23.843222 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:51:23.843282 kubelet[2596]: I0813 00:51:23.843230 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:51:24.807871 kubelet[2596]: E0813 00:51:24.807104 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:51:27.391911 systemd[1]: Started sshd@39-172.234.29.142:22-139.178.89.65:56634.service - OpenSSH per-connection server daemon (139.178.89.65:56634).
Aug 13 00:51:27.720090 sshd[4456]: Accepted publickey for core from 139.178.89.65 port 56634 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:27.721927 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:27.728168 systemd-logind[1461]: New session 40 of user core.
Aug 13 00:51:27.734844 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 00:51:27.807414 kubelet[2596]: E0813 00:51:27.807383 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:51:28.024659 sshd[4458]: Connection closed by 139.178.89.65 port 56634
Aug 13 00:51:28.025609 sshd-session[4456]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:28.030270 systemd-logind[1461]: Session 40 logged out. Waiting for processes to exit.
Aug 13 00:51:28.031387 systemd[1]: sshd@39-172.234.29.142:22-139.178.89.65:56634.service: Deactivated successfully.
Aug 13 00:51:28.034145 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 00:51:28.035466 systemd-logind[1461]: Removed session 40.
Aug 13 00:51:33.091919 systemd[1]: Started sshd@40-172.234.29.142:22-139.178.89.65:36484.service - OpenSSH per-connection server daemon (139.178.89.65:36484).
Aug 13 00:51:33.408357 sshd[4470]: Accepted publickey for core from 139.178.89.65 port 36484 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:33.410268 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:33.417188 systemd-logind[1461]: New session 41 of user core.
Aug 13 00:51:33.423812 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 00:51:33.698466 sshd[4472]: Connection closed by 139.178.89.65 port 36484
Aug 13 00:51:33.699200 sshd-session[4470]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:33.703305 systemd[1]: sshd@40-172.234.29.142:22-139.178.89.65:36484.service: Deactivated successfully.
Aug 13 00:51:33.705841 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 00:51:33.707986 systemd-logind[1461]: Session 41 logged out. Waiting for processes to exit.
Aug 13 00:51:33.709213 systemd-logind[1461]: Removed session 41.
Aug 13 00:51:33.863793 kubelet[2596]: I0813 00:51:33.863752 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:33.863793 kubelet[2596]: I0813 00:51:33.863799 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:51:33.864937 kubelet[2596]: I0813 00:51:33.864916 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:51:33.874805 kubelet[2596]: I0813 00:51:33.874789 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:33.874948 kubelet[2596]: I0813 00:51:33.874929 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:51:33.874988 kubelet[2596]: E0813 00:51:33.874967 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:51:33.874988 kubelet[2596]: E0813 00:51:33.874980 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:51:33.874988 kubelet[2596]: E0813 00:51:33.874988 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:51:33.875069 kubelet[2596]: E0813 00:51:33.874997 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:51:33.875069 kubelet[2596]: E0813 00:51:33.875006 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:51:33.875069 kubelet[2596]: E0813 00:51:33.875014 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:51:33.875069 kubelet[2596]: E0813 00:51:33.875022 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:51:33.875069 kubelet[2596]: E0813 00:51:33.875029 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:51:33.875069 kubelet[2596]: I0813 00:51:33.875039 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:51:38.770932 systemd[1]: Started sshd@41-172.234.29.142:22-139.178.89.65:36496.service - OpenSSH per-connection server daemon (139.178.89.65:36496).
Aug 13 00:51:38.810082 kubelet[2596]: E0813 00:51:38.809895 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:51:39.099903 sshd[4484]: Accepted publickey for core from 139.178.89.65 port 36496 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:39.101279 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:39.106052 systemd-logind[1461]: New session 42 of user core.
Aug 13 00:51:39.115095 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 00:51:39.400718 sshd[4486]: Connection closed by 139.178.89.65 port 36496
Aug 13 00:51:39.401843 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:39.405593 systemd-logind[1461]: Session 42 logged out. Waiting for processes to exit.
Aug 13 00:51:39.406492 systemd[1]: sshd@41-172.234.29.142:22-139.178.89.65:36496.service: Deactivated successfully.
Aug 13 00:51:39.409243 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 00:51:39.410279 systemd-logind[1461]: Removed session 42.
Aug 13 00:51:43.893905 kubelet[2596]: I0813 00:51:43.893864 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:43.893905 kubelet[2596]: I0813 00:51:43.893915 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:51:43.896712 kubelet[2596]: I0813 00:51:43.895799 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:51:43.909071 kubelet[2596]: I0813 00:51:43.909044 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:43.909275 kubelet[2596]: I0813 00:51:43.909253 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:51:43.909326 kubelet[2596]: E0813 00:51:43.909303 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:51:43.909326 kubelet[2596]: E0813 00:51:43.909325 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:51:43.909373 kubelet[2596]: E0813 00:51:43.909337 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:51:43.909373 kubelet[2596]: E0813 00:51:43.909349 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:51:43.909373 kubelet[2596]: E0813 00:51:43.909359 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:51:43.909373 kubelet[2596]: E0813 00:51:43.909367 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:51:43.909373 kubelet[2596]: E0813 00:51:43.909375 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:51:43.909484 kubelet[2596]: E0813 00:51:43.909386 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:51:43.909484 kubelet[2596]: I0813 00:51:43.909397 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:51:44.466917 systemd[1]: Started sshd@42-172.234.29.142:22-139.178.89.65:44818.service - OpenSSH per-connection server daemon (139.178.89.65:44818).
Aug 13 00:51:44.787222 sshd[4499]: Accepted publickey for core from 139.178.89.65 port 44818 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:44.789206 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:44.794120 systemd-logind[1461]: New session 43 of user core.
Aug 13 00:51:44.801828 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 00:51:45.101288 sshd[4501]: Connection closed by 139.178.89.65 port 44818
Aug 13 00:51:45.103745 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:45.107007 systemd-logind[1461]: Session 43 logged out. Waiting for processes to exit.
Aug 13 00:51:45.108499 systemd[1]: sshd@42-172.234.29.142:22-139.178.89.65:44818.service: Deactivated successfully.
Aug 13 00:51:45.111529 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 00:51:45.113856 systemd-logind[1461]: Removed session 43.
Aug 13 00:51:45.806858 kubelet[2596]: E0813 00:51:45.806818 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:51:50.167867 systemd[1]: Started sshd@43-172.234.29.142:22-139.178.89.65:35724.service - OpenSSH per-connection server daemon (139.178.89.65:35724).
Aug 13 00:51:50.491369 sshd[4515]: Accepted publickey for core from 139.178.89.65 port 35724 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:50.493020 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:50.498133 systemd-logind[1461]: New session 44 of user core.
Aug 13 00:51:50.506812 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 00:51:50.796720 sshd[4517]: Connection closed by 139.178.89.65 port 35724
Aug 13 00:51:50.797892 sshd-session[4515]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:50.802185 systemd[1]: sshd@43-172.234.29.142:22-139.178.89.65:35724.service: Deactivated successfully.
Aug 13 00:51:50.805020 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 00:51:50.805959 systemd-logind[1461]: Session 44 logged out. Waiting for processes to exit.
Aug 13 00:51:50.808316 systemd-logind[1461]: Removed session 44.
Aug 13 00:51:53.928309 kubelet[2596]: I0813 00:51:53.928271 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:53.928309 kubelet[2596]: I0813 00:51:53.928313 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:51:53.929814 kubelet[2596]: I0813 00:51:53.929799 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:51:53.939861 kubelet[2596]: I0813 00:51:53.939845 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:51:53.939998 kubelet[2596]: I0813 00:51:53.939978 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"]
Aug 13 00:51:53.940074 kubelet[2596]: E0813 00:51:53.940058 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4"
Aug 13 00:51:53.940074 kubelet[2596]: E0813 00:51:53.940075 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d"
Aug 13 00:51:53.940173 kubelet[2596]: E0813 00:51:53.940084 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx"
Aug 13 00:51:53.940173 kubelet[2596]: E0813 00:51:53.940093 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq"
Aug 13 00:51:53.940173 kubelet[2596]: E0813 00:51:53.940102 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142"
Aug 13 00:51:53.940173 kubelet[2596]: E0813 00:51:53.940111 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh"
Aug 13 00:51:53.940173 kubelet[2596]: E0813 00:51:53.940120 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142"
Aug 13 00:51:53.940173 kubelet[2596]: E0813 00:51:53.940137 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142"
Aug 13 00:51:53.940173 kubelet[2596]: I0813 00:51:53.940148 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:51:55.860709 systemd[1]: Started sshd@44-172.234.29.142:22-139.178.89.65:35736.service - OpenSSH per-connection server daemon (139.178.89.65:35736).
Aug 13 00:51:56.202051 sshd[4530]: Accepted publickey for core from 139.178.89.65 port 35736 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:51:56.203467 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:56.207982 systemd-logind[1461]: New session 45 of user core.
Aug 13 00:51:56.214217 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 00:51:56.512208 sshd[4532]: Connection closed by 139.178.89.65 port 35736
Aug 13 00:51:56.513095 sshd-session[4530]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:56.516906 systemd-logind[1461]: Session 45 logged out. Waiting for processes to exit.
Aug 13 00:51:56.517647 systemd[1]: sshd@44-172.234.29.142:22-139.178.89.65:35736.service: Deactivated successfully.
Aug 13 00:51:56.520160 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 00:51:56.521122 systemd-logind[1461]: Removed session 45.
Aug 13 00:52:01.579918 systemd[1]: Started sshd@45-172.234.29.142:22-139.178.89.65:40724.service - OpenSSH per-connection server daemon (139.178.89.65:40724).
Aug 13 00:52:01.914458 sshd[4544]: Accepted publickey for core from 139.178.89.65 port 40724 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:01.916260 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:01.922000 systemd-logind[1461]: New session 46 of user core. Aug 13 00:52:01.929853 systemd[1]: Started session-46.scope - Session 46 of User core. Aug 13 00:52:02.218354 sshd[4546]: Connection closed by 139.178.89.65 port 40724 Aug 13 00:52:02.219002 sshd-session[4544]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:02.225762 systemd-logind[1461]: Session 46 logged out. Waiting for processes to exit. Aug 13 00:52:02.226412 systemd[1]: sshd@45-172.234.29.142:22-139.178.89.65:40724.service: Deactivated successfully. Aug 13 00:52:02.229185 systemd[1]: session-46.scope: Deactivated successfully. Aug 13 00:52:02.230099 systemd-logind[1461]: Removed session 46. Aug 13 00:52:03.958187 kubelet[2596]: I0813 00:52:03.958144 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:03.958187 kubelet[2596]: I0813 00:52:03.958181 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:52:03.960284 kubelet[2596]: I0813 00:52:03.960263 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:52:03.972126 kubelet[2596]: I0813 00:52:03.972093 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:03.972260 kubelet[2596]: I0813 00:52:03.972226 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:52:03.972294 kubelet[2596]: E0813 00:52:03.972262 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:52:03.972294 kubelet[2596]: E0813 00:52:03.972276 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:52:03.972294 kubelet[2596]: E0813 00:52:03.972288 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:52:03.972366 kubelet[2596]: E0813 00:52:03.972298 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:52:03.972366 kubelet[2596]: E0813 00:52:03.972309 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:52:03.972366 kubelet[2596]: E0813 00:52:03.972317 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:52:03.972366 kubelet[2596]: E0813 00:52:03.972326 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:52:03.972366 kubelet[2596]: E0813 00:52:03.972334 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:52:03.972366 kubelet[2596]: I0813 00:52:03.972344 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:52:07.284894 systemd[1]: Started 
sshd@46-172.234.29.142:22-139.178.89.65:40726.service - OpenSSH per-connection server daemon (139.178.89.65:40726). Aug 13 00:52:07.619379 sshd[4558]: Accepted publickey for core from 139.178.89.65 port 40726 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:07.621230 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:07.626045 systemd-logind[1461]: New session 47 of user core. Aug 13 00:52:07.639794 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 00:52:07.917141 sshd[4560]: Connection closed by 139.178.89.65 port 40726 Aug 13 00:52:07.918015 sshd-session[4558]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:07.921390 systemd[1]: sshd@46-172.234.29.142:22-139.178.89.65:40726.service: Deactivated successfully. Aug 13 00:52:07.923538 systemd[1]: session-47.scope: Deactivated successfully. Aug 13 00:52:07.925032 systemd-logind[1461]: Session 47 logged out. Waiting for processes to exit. Aug 13 00:52:07.926108 systemd-logind[1461]: Removed session 47. Aug 13 00:52:12.816604 kubelet[2596]: I0813 00:52:12.816505 2596 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=88 highThreshold=85 amountToFree=155953561 lowThreshold=80 Aug 13 00:52:12.816604 kubelet[2596]: E0813 00:52:12.816544 2596 kubelet.go:1551] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 155953561 bytes, but only found 0 bytes eligible to free." Aug 13 00:52:12.982894 systemd[1]: Started sshd@47-172.234.29.142:22-139.178.89.65:47164.service - OpenSSH per-connection server daemon (139.178.89.65:47164). 
Aug 13 00:52:13.308927 sshd[4574]: Accepted publickey for core from 139.178.89.65 port 47164 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:13.310389 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:13.315846 systemd-logind[1461]: New session 48 of user core. Aug 13 00:52:13.321800 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 00:52:13.615415 sshd[4576]: Connection closed by 139.178.89.65 port 47164 Aug 13 00:52:13.616258 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:13.620285 systemd[1]: sshd@47-172.234.29.142:22-139.178.89.65:47164.service: Deactivated successfully. Aug 13 00:52:13.622799 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 00:52:13.624650 systemd-logind[1461]: Session 48 logged out. Waiting for processes to exit. Aug 13 00:52:13.625847 systemd-logind[1461]: Removed session 48. Aug 13 00:52:13.991179 kubelet[2596]: I0813 00:52:13.991122 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:13.991179 kubelet[2596]: I0813 00:52:13.991174 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:52:13.997801 kubelet[2596]: I0813 00:52:13.997719 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:52:14.011048 kubelet[2596]: I0813 00:52:14.011011 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:14.011206 kubelet[2596]: I0813 00:52:14.011147 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:52:14.011206 kubelet[2596]: E0813 00:52:14.011184 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:52:14.011206 kubelet[2596]: E0813 00:52:14.011200 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:52:14.011206 kubelet[2596]: E0813 00:52:14.011210 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:52:14.011367 kubelet[2596]: E0813 00:52:14.011221 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:52:14.011367 kubelet[2596]: E0813 00:52:14.011231 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:52:14.011367 kubelet[2596]: E0813 00:52:14.011241 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:52:14.011367 kubelet[2596]: E0813 00:52:14.011252 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:52:14.011367 kubelet[2596]: E0813 00:52:14.011261 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:52:14.011367 kubelet[2596]: I0813 00:52:14.011271 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:52:16.808013 kubelet[2596]: E0813 00:52:16.807100 
2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:52:17.807255 kubelet[2596]: E0813 00:52:17.807223 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:52:18.689874 systemd[1]: Started sshd@48-172.234.29.142:22-139.178.89.65:47178.service - OpenSSH per-connection server daemon (139.178.89.65:47178). Aug 13 00:52:19.030813 sshd[4588]: Accepted publickey for core from 139.178.89.65 port 47178 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:19.032098 sshd-session[4588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:19.037338 systemd-logind[1461]: New session 49 of user core. Aug 13 00:52:19.043796 systemd[1]: Started session-49.scope - Session 49 of User core. Aug 13 00:52:19.336993 sshd[4590]: Connection closed by 139.178.89.65 port 47178 Aug 13 00:52:19.337777 sshd-session[4588]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:19.341484 systemd[1]: sshd@48-172.234.29.142:22-139.178.89.65:47178.service: Deactivated successfully. Aug 13 00:52:19.343437 systemd[1]: session-49.scope: Deactivated successfully. Aug 13 00:52:19.344538 systemd-logind[1461]: Session 49 logged out. Waiting for processes to exit. Aug 13 00:52:19.345619 systemd-logind[1461]: Removed session 49. 
Aug 13 00:52:20.807648 kubelet[2596]: E0813 00:52:20.807248 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:52:24.031906 kubelet[2596]: I0813 00:52:24.031478 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:24.031906 kubelet[2596]: I0813 00:52:24.031519 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:52:24.033409 kubelet[2596]: I0813 00:52:24.033380 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:52:24.046021 kubelet[2596]: I0813 00:52:24.045988 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:24.046147 kubelet[2596]: I0813 00:52:24.046077 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:52:24.046147 kubelet[2596]: E0813 00:52:24.046106 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:52:24.046147 kubelet[2596]: E0813 00:52:24.046117 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:52:24.046147 kubelet[2596]: E0813 00:52:24.046126 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:52:24.046147 kubelet[2596]: E0813 00:52:24.046136 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/cilium-8pgbq" Aug 13 00:52:24.046147 kubelet[2596]: E0813 00:52:24.046144 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:52:24.046147 kubelet[2596]: E0813 00:52:24.046154 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:52:24.046147 kubelet[2596]: E0813 00:52:24.046164 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:52:24.046365 kubelet[2596]: E0813 00:52:24.046172 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:52:24.046365 kubelet[2596]: I0813 00:52:24.046183 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:52:24.412895 systemd[1]: Started sshd@49-172.234.29.142:22-139.178.89.65:35798.service - OpenSSH per-connection server daemon (139.178.89.65:35798). Aug 13 00:52:24.733693 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 35798 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:24.735249 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:24.739375 systemd-logind[1461]: New session 50 of user core. Aug 13 00:52:24.742786 systemd[1]: Started session-50.scope - Session 50 of User core. Aug 13 00:52:25.032641 sshd[4606]: Connection closed by 139.178.89.65 port 35798 Aug 13 00:52:25.033555 sshd-session[4604]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:25.038511 systemd[1]: sshd@49-172.234.29.142:22-139.178.89.65:35798.service: Deactivated successfully. Aug 13 00:52:25.041289 systemd[1]: session-50.scope: Deactivated successfully. Aug 13 00:52:25.042247 systemd-logind[1461]: Session 50 logged out. Waiting for processes to exit. 
Aug 13 00:52:25.043568 systemd-logind[1461]: Removed session 50. Aug 13 00:52:30.101199 systemd[1]: Started sshd@50-172.234.29.142:22-139.178.89.65:60050.service - OpenSSH per-connection server daemon (139.178.89.65:60050). Aug 13 00:52:30.424512 sshd[4618]: Accepted publickey for core from 139.178.89.65 port 60050 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:30.426003 sshd-session[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:30.430166 systemd-logind[1461]: New session 51 of user core. Aug 13 00:52:30.434843 systemd[1]: Started session-51.scope - Session 51 of User core. Aug 13 00:52:30.729255 sshd[4620]: Connection closed by 139.178.89.65 port 60050 Aug 13 00:52:30.730337 sshd-session[4618]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:30.734780 systemd-logind[1461]: Session 51 logged out. Waiting for processes to exit. Aug 13 00:52:30.735854 systemd[1]: sshd@50-172.234.29.142:22-139.178.89.65:60050.service: Deactivated successfully. Aug 13 00:52:30.738590 systemd[1]: session-51.scope: Deactivated successfully. Aug 13 00:52:30.739839 systemd-logind[1461]: Removed session 51. 
Aug 13 00:52:34.066425 kubelet[2596]: I0813 00:52:34.066375 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:34.066425 kubelet[2596]: I0813 00:52:34.066427 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:52:34.068086 kubelet[2596]: I0813 00:52:34.068073 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:52:34.079036 kubelet[2596]: I0813 00:52:34.079019 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:34.079137 kubelet[2596]: I0813 00:52:34.079120 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:52:34.079166 kubelet[2596]: E0813 00:52:34.079157 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:52:34.079209 kubelet[2596]: E0813 00:52:34.079169 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:52:34.079209 kubelet[2596]: E0813 00:52:34.079178 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:52:34.079209 kubelet[2596]: E0813 00:52:34.079188 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:52:34.079209 kubelet[2596]: E0813 00:52:34.079196 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:52:34.079209 
kubelet[2596]: E0813 00:52:34.079205 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:52:34.079310 kubelet[2596]: E0813 00:52:34.079214 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:52:34.079310 kubelet[2596]: E0813 00:52:34.079223 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:52:34.079310 kubelet[2596]: I0813 00:52:34.079232 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:52:35.794883 systemd[1]: Started sshd@51-172.234.29.142:22-139.178.89.65:60066.service - OpenSSH per-connection server daemon (139.178.89.65:60066). Aug 13 00:52:36.117093 sshd[4632]: Accepted publickey for core from 139.178.89.65 port 60066 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:36.118496 sshd-session[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:36.122733 systemd-logind[1461]: New session 52 of user core. Aug 13 00:52:36.129856 systemd[1]: Started session-52.scope - Session 52 of User core. Aug 13 00:52:36.411514 sshd[4634]: Connection closed by 139.178.89.65 port 60066 Aug 13 00:52:36.413771 sshd-session[4632]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:36.418301 systemd[1]: sshd@51-172.234.29.142:22-139.178.89.65:60066.service: Deactivated successfully. Aug 13 00:52:36.421107 systemd[1]: session-52.scope: Deactivated successfully. Aug 13 00:52:36.422000 systemd-logind[1461]: Session 52 logged out. Waiting for processes to exit. Aug 13 00:52:36.423040 systemd-logind[1461]: Removed session 52. 
Aug 13 00:52:39.807442 kubelet[2596]: E0813 00:52:39.807343 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:52:40.808011 kubelet[2596]: E0813 00:52:40.807136 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 00:52:41.479765 systemd[1]: Started sshd@52-172.234.29.142:22-139.178.89.65:55828.service - OpenSSH per-connection server daemon (139.178.89.65:55828). Aug 13 00:52:41.804596 sshd[4645]: Accepted publickey for core from 139.178.89.65 port 55828 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:41.806660 sshd-session[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:41.812322 systemd-logind[1461]: New session 53 of user core. Aug 13 00:52:41.821817 systemd[1]: Started session-53.scope - Session 53 of User core. Aug 13 00:52:42.104335 sshd[4647]: Connection closed by 139.178.89.65 port 55828 Aug 13 00:52:42.105268 sshd-session[4645]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:42.109732 systemd[1]: sshd@52-172.234.29.142:22-139.178.89.65:55828.service: Deactivated successfully. Aug 13 00:52:42.112613 systemd[1]: session-53.scope: Deactivated successfully. Aug 13 00:52:42.113774 systemd-logind[1461]: Session 53 logged out. Waiting for processes to exit. Aug 13 00:52:42.114941 systemd-logind[1461]: Removed session 53. Aug 13 00:52:42.169915 systemd[1]: Started sshd@53-172.234.29.142:22-139.178.89.65:55844.service - OpenSSH per-connection server daemon (139.178.89.65:55844). 
Aug 13 00:52:42.490068 sshd[4659]: Accepted publickey for core from 139.178.89.65 port 55844 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s Aug 13 00:52:42.492073 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:52:42.497431 systemd-logind[1461]: New session 54 of user core. Aug 13 00:52:42.500864 systemd[1]: Started session-54.scope - Session 54 of User core. Aug 13 00:52:43.961400 kubelet[2596]: I0813 00:52:43.961307 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cnmtx" podStartSLOduration=326.961286475 podStartE2EDuration="5m26.961286475s" podCreationTimestamp="2025-08-13 00:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:35.934102334 +0000 UTC m=+23.236687421" watchObservedRunningTime="2025-08-13 00:52:43.961286475 +0000 UTC m=+331.263871552" Aug 13 00:52:43.970705 containerd[1485]: time="2025-08-13T00:52:43.969849839Z" level=info msg="StopContainer for \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\" with timeout 30 (s)" Aug 13 00:52:43.972045 containerd[1485]: time="2025-08-13T00:52:43.970766152Z" level=info msg="Stop container \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\" with signal terminated" Aug 13 00:52:43.997740 systemd[1]: run-containerd-runc-k8s.io-8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37-runc.0nQagL.mount: Deactivated successfully. Aug 13 00:52:43.999491 systemd[1]: cri-containerd-cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6.scope: Deactivated successfully. 
Aug 13 00:52:44.015976 containerd[1485]: time="2025-08-13T00:52:44.015915796Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:52:44.028231 containerd[1485]: time="2025-08-13T00:52:44.028061247Z" level=info msg="StopContainer for \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\" with timeout 2 (s)" Aug 13 00:52:44.028971 containerd[1485]: time="2025-08-13T00:52:44.028742921Z" level=info msg="Stop container \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\" with signal terminated" Aug 13 00:52:44.029482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6-rootfs.mount: Deactivated successfully. Aug 13 00:52:44.034326 containerd[1485]: time="2025-08-13T00:52:44.034021999Z" level=info msg="shim disconnected" id=cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6 namespace=k8s.io Aug 13 00:52:44.034326 containerd[1485]: time="2025-08-13T00:52:44.034083994Z" level=warning msg="cleaning up after shim disconnected" id=cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6 namespace=k8s.io Aug 13 00:52:44.034326 containerd[1485]: time="2025-08-13T00:52:44.034093494Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:52:44.040000 systemd-networkd[1389]: lxc_health: Link DOWN Aug 13 00:52:44.040010 systemd-networkd[1389]: lxc_health: Lost carrier Aug 13 00:52:44.061967 systemd[1]: cri-containerd-8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37.scope: Deactivated successfully. Aug 13 00:52:44.062859 systemd[1]: cri-containerd-8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37.scope: Consumed 7.211s CPU time, 123.7M memory peak, 136K read from disk, 13.3M written to disk. 
Aug 13 00:52:44.068378 containerd[1485]: time="2025-08-13T00:52:44.068141108Z" level=info msg="StopContainer for \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\" returns successfully" Aug 13 00:52:44.071712 containerd[1485]: time="2025-08-13T00:52:44.069451851Z" level=info msg="StopPodSandbox for \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\"" Aug 13 00:52:44.071712 containerd[1485]: time="2025-08-13T00:52:44.069497835Z" level=info msg="Container to stop \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:52:44.072175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684-shm.mount: Deactivated successfully. Aug 13 00:52:44.087197 systemd[1]: cri-containerd-9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684.scope: Deactivated successfully. Aug 13 00:52:44.109649 kubelet[2596]: I0813 00:52:44.109432 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:44.109649 kubelet[2596]: I0813 00:52:44.109480 2596 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:52:44.111850 kubelet[2596]: I0813 00:52:44.111738 2596 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:52:44.118049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37-rootfs.mount: Deactivated successfully. 
Aug 13 00:52:44.137845 containerd[1485]: time="2025-08-13T00:52:44.137168908Z" level=info msg="shim disconnected" id=8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37 namespace=k8s.io Aug 13 00:52:44.137845 containerd[1485]: time="2025-08-13T00:52:44.137522886Z" level=warning msg="cleaning up after shim disconnected" id=8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37 namespace=k8s.io Aug 13 00:52:44.137845 containerd[1485]: time="2025-08-13T00:52:44.137533467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:52:44.139313 kubelet[2596]: I0813 00:52:44.139018 2596 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:52:44.139313 kubelet[2596]: I0813 00:52:44.139149 2596 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gnhj4","kube-system/coredns-668d6bf9bc-ksr2d","kube-system/coredns-668d6bf9bc-cnmtx","kube-system/cilium-8pgbq","kube-system/kube-controller-manager-172-234-29-142","kube-system/kube-proxy-g2zwh","kube-system/kube-apiserver-172-234-29-142","kube-system/kube-scheduler-172-234-29-142"] Aug 13 00:52:44.139313 kubelet[2596]: E0813 00:52:44.139179 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gnhj4" Aug 13 00:52:44.139313 kubelet[2596]: E0813 00:52:44.139211 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-ksr2d" Aug 13 00:52:44.139313 kubelet[2596]: E0813 00:52:44.139222 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-cnmtx" Aug 13 00:52:44.139313 kubelet[2596]: E0813 00:52:44.139234 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-8pgbq" Aug 13 00:52:44.139313 kubelet[2596]: E0813 00:52:44.139244 2596 
eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-142" Aug 13 00:52:44.139313 kubelet[2596]: E0813 00:52:44.139253 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-g2zwh" Aug 13 00:52:44.139313 kubelet[2596]: E0813 00:52:44.139262 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-142" Aug 13 00:52:44.139313 kubelet[2596]: E0813 00:52:44.139290 2596 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-142" Aug 13 00:52:44.139313 kubelet[2596]: I0813 00:52:44.139301 2596 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:52:44.148701 containerd[1485]: time="2025-08-13T00:52:44.148208282Z" level=info msg="shim disconnected" id=9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684 namespace=k8s.io Aug 13 00:52:44.148701 containerd[1485]: time="2025-08-13T00:52:44.148247975Z" level=warning msg="cleaning up after shim disconnected" id=9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684 namespace=k8s.io Aug 13 00:52:44.148701 containerd[1485]: time="2025-08-13T00:52:44.148256755Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:52:44.163086 containerd[1485]: time="2025-08-13T00:52:44.162786175Z" level=info msg="StopContainer for \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\" returns successfully" Aug 13 00:52:44.164751 containerd[1485]: time="2025-08-13T00:52:44.163214599Z" level=info msg="StopPodSandbox for \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\"" Aug 13 00:52:44.164751 containerd[1485]: time="2025-08-13T00:52:44.163248341Z" level=info msg="Container to stop \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Aug 13 00:52:44.164751 containerd[1485]: time="2025-08-13T00:52:44.163284304Z" level=info msg="Container to stop \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:52:44.164751 containerd[1485]: time="2025-08-13T00:52:44.163292655Z" level=info msg="Container to stop \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:52:44.164751 containerd[1485]: time="2025-08-13T00:52:44.163302396Z" level=info msg="Container to stop \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:52:44.164751 containerd[1485]: time="2025-08-13T00:52:44.163310806Z" level=info msg="Container to stop \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:52:44.172442 containerd[1485]: time="2025-08-13T00:52:44.172420327Z" level=info msg="TearDown network for sandbox \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\" successfully" Aug 13 00:52:44.172541 containerd[1485]: time="2025-08-13T00:52:44.172526885Z" level=info msg="StopPodSandbox for \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\" returns successfully" Aug 13 00:52:44.173247 systemd[1]: cri-containerd-c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97.scope: Deactivated successfully. 
Aug 13 00:52:44.184857 kubelet[2596]: I0813 00:52:44.184836 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deb77b33-8127-46b4-834c-dc204d34fbcd-cilium-config-path\") pod \"deb77b33-8127-46b4-834c-dc204d34fbcd\" (UID: \"deb77b33-8127-46b4-834c-dc204d34fbcd\") " Aug 13 00:52:44.188793 kubelet[2596]: I0813 00:52:44.188486 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg5zr\" (UniqueName: \"kubernetes.io/projected/deb77b33-8127-46b4-834c-dc204d34fbcd-kube-api-access-xg5zr\") pod \"deb77b33-8127-46b4-834c-dc204d34fbcd\" (UID: \"deb77b33-8127-46b4-834c-dc204d34fbcd\") " Aug 13 00:52:44.194930 kubelet[2596]: I0813 00:52:44.194905 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deb77b33-8127-46b4-834c-dc204d34fbcd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "deb77b33-8127-46b4-834c-dc204d34fbcd" (UID: "deb77b33-8127-46b4-834c-dc204d34fbcd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:52:44.196057 kubelet[2596]: I0813 00:52:44.196038 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb77b33-8127-46b4-834c-dc204d34fbcd-kube-api-access-xg5zr" (OuterVolumeSpecName: "kube-api-access-xg5zr") pod "deb77b33-8127-46b4-834c-dc204d34fbcd" (UID: "deb77b33-8127-46b4-834c-dc204d34fbcd"). InnerVolumeSpecName "kube-api-access-xg5zr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:52:44.206603 containerd[1485]: time="2025-08-13T00:52:44.206399015Z" level=info msg="shim disconnected" id=c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97 namespace=k8s.io Aug 13 00:52:44.206603 containerd[1485]: time="2025-08-13T00:52:44.206472831Z" level=warning msg="cleaning up after shim disconnected" id=c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97 namespace=k8s.io Aug 13 00:52:44.206603 containerd[1485]: time="2025-08-13T00:52:44.206482211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:52:44.221094 containerd[1485]: time="2025-08-13T00:52:44.221008191Z" level=info msg="TearDown network for sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" successfully" Aug 13 00:52:44.221094 containerd[1485]: time="2025-08-13T00:52:44.221040793Z" level=info msg="StopPodSandbox for \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" returns successfully" Aug 13 00:52:44.289717 kubelet[2596]: I0813 00:52:44.289224 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-host-proc-sys-kernel\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.289717 kubelet[2596]: I0813 00:52:44.289271 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-run\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.289717 kubelet[2596]: I0813 00:52:44.289294 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp5gb\" (UniqueName: \"kubernetes.io/projected/5ed7fd22-d5dc-4877-8b35-3a97e246932f-kube-api-access-qp5gb\") 
pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.289717 kubelet[2596]: I0813 00:52:44.289308 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-cgroup\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.289717 kubelet[2596]: I0813 00:52:44.289322 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-lib-modules\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.289717 kubelet[2596]: I0813 00:52:44.289335 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-bpf-maps\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290072 kubelet[2596]: I0813 00:52:44.289353 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ed7fd22-d5dc-4877-8b35-3a97e246932f-clustermesh-secrets\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290072 kubelet[2596]: I0813 00:52:44.289369 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cni-path\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290072 kubelet[2596]: I0813 00:52:44.289384 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/5ed7fd22-d5dc-4877-8b35-3a97e246932f-hubble-tls\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290072 kubelet[2596]: I0813 00:52:44.289398 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-host-proc-sys-net\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290072 kubelet[2596]: I0813 00:52:44.289411 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-etc-cni-netd\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290072 kubelet[2596]: I0813 00:52:44.289425 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-hostproc\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290213 kubelet[2596]: I0813 00:52:44.289443 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-config-path\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290213 kubelet[2596]: I0813 00:52:44.289458 2596 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-xtables-lock\") pod \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\" (UID: \"5ed7fd22-d5dc-4877-8b35-3a97e246932f\") " Aug 13 00:52:44.290213 kubelet[2596]: I0813 00:52:44.289489 2596 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deb77b33-8127-46b4-834c-dc204d34fbcd-cilium-config-path\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.290213 kubelet[2596]: I0813 00:52:44.289500 2596 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xg5zr\" (UniqueName: \"kubernetes.io/projected/deb77b33-8127-46b4-834c-dc204d34fbcd-kube-api-access-xg5zr\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.290213 kubelet[2596]: I0813 00:52:44.289555 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.290213 kubelet[2596]: I0813 00:52:44.289586 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.290343 kubelet[2596]: I0813 00:52:44.289614 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.290343 kubelet[2596]: I0813 00:52:44.289987 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.290343 kubelet[2596]: I0813 00:52:44.290010 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.290343 kubelet[2596]: I0813 00:52:44.290026 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.290343 kubelet[2596]: I0813 00:52:44.290038 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.294717 kubelet[2596]: I0813 00:52:44.292757 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.298245 kubelet[2596]: I0813 00:52:44.298122 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.298245 kubelet[2596]: I0813 00:52:44.298227 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed7fd22-d5dc-4877-8b35-3a97e246932f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:52:44.298350 kubelet[2596]: I0813 00:52:44.298256 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:52:44.300858 kubelet[2596]: I0813 00:52:44.300826 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed7fd22-d5dc-4877-8b35-3a97e246932f-kube-api-access-qp5gb" (OuterVolumeSpecName: "kube-api-access-qp5gb") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "kube-api-access-qp5gb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:52:44.301801 kubelet[2596]: I0813 00:52:44.301777 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed7fd22-d5dc-4877-8b35-3a97e246932f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:52:44.302833 kubelet[2596]: I0813 00:52:44.302769 2596 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ed7fd22-d5dc-4877-8b35-3a97e246932f" (UID: "5ed7fd22-d5dc-4877-8b35-3a97e246932f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:52:44.389942 kubelet[2596]: I0813 00:52:44.389895 2596 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-etc-cni-netd\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.389942 kubelet[2596]: I0813 00:52:44.389928 2596 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-hostproc\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.389942 kubelet[2596]: I0813 00:52:44.389943 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-config-path\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390065 kubelet[2596]: I0813 00:52:44.389957 2596 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-host-proc-sys-net\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390065 kubelet[2596]: I0813 00:52:44.389968 2596 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-xtables-lock\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390065 kubelet[2596]: I0813 00:52:44.389979 2596 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-host-proc-sys-kernel\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390065 kubelet[2596]: I0813 00:52:44.389989 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-run\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 
00:52:44.390065 kubelet[2596]: I0813 00:52:44.389996 2596 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cilium-cgroup\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390065 kubelet[2596]: I0813 00:52:44.390004 2596 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-lib-modules\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390065 kubelet[2596]: I0813 00:52:44.390012 2596 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-bpf-maps\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390065 kubelet[2596]: I0813 00:52:44.390023 2596 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qp5gb\" (UniqueName: \"kubernetes.io/projected/5ed7fd22-d5dc-4877-8b35-3a97e246932f-kube-api-access-qp5gb\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390245 kubelet[2596]: I0813 00:52:44.390031 2596 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ed7fd22-d5dc-4877-8b35-3a97e246932f-clustermesh-secrets\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390245 kubelet[2596]: I0813 00:52:44.390040 2596 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ed7fd22-d5dc-4877-8b35-3a97e246932f-cni-path\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.390245 kubelet[2596]: I0813 00:52:44.390050 2596 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ed7fd22-d5dc-4877-8b35-3a97e246932f-hubble-tls\") on node \"172-234-29-142\" DevicePath \"\"" Aug 13 00:52:44.513322 kubelet[2596]: I0813 00:52:44.511307 2596 scope.go:117] 
"RemoveContainer" containerID="8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37" Aug 13 00:52:44.517013 containerd[1485]: time="2025-08-13T00:52:44.516960502Z" level=info msg="RemoveContainer for \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\"" Aug 13 00:52:44.522536 containerd[1485]: time="2025-08-13T00:52:44.522263232Z" level=info msg="RemoveContainer for \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\" returns successfully" Aug 13 00:52:44.523061 systemd[1]: Removed slice kubepods-burstable-pod5ed7fd22_d5dc_4877_8b35_3a97e246932f.slice - libcontainer container kubepods-burstable-pod5ed7fd22_d5dc_4877_8b35_3a97e246932f.slice. Aug 13 00:52:44.523415 systemd[1]: kubepods-burstable-pod5ed7fd22_d5dc_4877_8b35_3a97e246932f.slice: Consumed 7.310s CPU time, 124.1M memory peak, 136K read from disk, 13.3M written to disk. Aug 13 00:52:44.525649 kubelet[2596]: I0813 00:52:44.525603 2596 scope.go:117] "RemoveContainer" containerID="32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1" Aug 13 00:52:44.527127 containerd[1485]: time="2025-08-13T00:52:44.526615426Z" level=info msg="RemoveContainer for \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\"" Aug 13 00:52:44.528556 systemd[1]: Removed slice kubepods-besteffort-poddeb77b33_8127_46b4_834c_dc204d34fbcd.slice - libcontainer container kubepods-besteffort-poddeb77b33_8127_46b4_834c_dc204d34fbcd.slice. 
Aug 13 00:52:44.529424 containerd[1485]: time="2025-08-13T00:52:44.529361653Z" level=info msg="RemoveContainer for \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\" returns successfully" Aug 13 00:52:44.529651 kubelet[2596]: I0813 00:52:44.529556 2596 scope.go:117] "RemoveContainer" containerID="9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68" Aug 13 00:52:44.531644 containerd[1485]: time="2025-08-13T00:52:44.531062628Z" level=info msg="RemoveContainer for \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\"" Aug 13 00:52:44.535035 containerd[1485]: time="2025-08-13T00:52:44.535005670Z" level=info msg="RemoveContainer for \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\" returns successfully" Aug 13 00:52:44.535217 kubelet[2596]: I0813 00:52:44.535179 2596 scope.go:117] "RemoveContainer" containerID="6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb" Aug 13 00:52:44.536001 containerd[1485]: time="2025-08-13T00:52:44.535965656Z" level=info msg="RemoveContainer for \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\"" Aug 13 00:52:44.541274 containerd[1485]: time="2025-08-13T00:52:44.541185218Z" level=info msg="RemoveContainer for \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\" returns successfully" Aug 13 00:52:44.541395 kubelet[2596]: I0813 00:52:44.541370 2596 scope.go:117] "RemoveContainer" containerID="459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6" Aug 13 00:52:44.543057 containerd[1485]: time="2025-08-13T00:52:44.543015383Z" level=info msg="RemoveContainer for \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\"" Aug 13 00:52:44.549265 containerd[1485]: time="2025-08-13T00:52:44.549215234Z" level=info msg="RemoveContainer for \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\" returns successfully" Aug 13 00:52:44.550235 kubelet[2596]: I0813 00:52:44.550214 2596 scope.go:117] 
"RemoveContainer" containerID="8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37" Aug 13 00:52:44.551317 containerd[1485]: time="2025-08-13T00:52:44.551245524Z" level=error msg="ContainerStatus for \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\": not found" Aug 13 00:52:44.551492 kubelet[2596]: E0813 00:52:44.551418 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\": not found" containerID="8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37" Aug 13 00:52:44.551816 kubelet[2596]: I0813 00:52:44.551468 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37"} err="failed to get container status \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\": rpc error: code = NotFound desc = an error occurred when try to find container \"8012bcae6988aa8eba322d56fe332b2c0a9c5dd20f7b6b24d232034d4cec2f37\": not found" Aug 13 00:52:44.551816 kubelet[2596]: I0813 00:52:44.551598 2596 scope.go:117] "RemoveContainer" containerID="32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1" Aug 13 00:52:44.551982 containerd[1485]: time="2025-08-13T00:52:44.551923528Z" level=error msg="ContainerStatus for \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\": not found" Aug 13 00:52:44.552873 kubelet[2596]: E0813 00:52:44.552665 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\": not found" containerID="32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1" Aug 13 00:52:44.555291 kubelet[2596]: I0813 00:52:44.552763 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1"} err="failed to get container status \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\": rpc error: code = NotFound desc = an error occurred when try to find container \"32da1e52d2490f2c22fbf746ea77f1a6d15549eb1ac4083231fc43750fd78fa1\": not found" Aug 13 00:52:44.555291 kubelet[2596]: I0813 00:52:44.554825 2596 scope.go:117] "RemoveContainer" containerID="9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68" Aug 13 00:52:44.555423 containerd[1485]: time="2025-08-13T00:52:44.555349799Z" level=error msg="ContainerStatus for \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\": not found" Aug 13 00:52:44.555947 kubelet[2596]: E0813 00:52:44.555926 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\": not found" containerID="9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68" Aug 13 00:52:44.556132 kubelet[2596]: I0813 00:52:44.556111 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68"} err="failed to get container status \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"9aa7d61d500c1a12383e3332b21acfa11fb111f34811635639610258e8751c68\": not found" Aug 13 00:52:44.556188 kubelet[2596]: I0813 00:52:44.556177 2596 scope.go:117] "RemoveContainer" containerID="6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb" Aug 13 00:52:44.556843 containerd[1485]: time="2025-08-13T00:52:44.556810114Z" level=error msg="ContainerStatus for \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\": not found" Aug 13 00:52:44.558698 kubelet[2596]: E0813 00:52:44.557801 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\": not found" containerID="6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb" Aug 13 00:52:44.558698 kubelet[2596]: I0813 00:52:44.557828 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb"} err="failed to get container status \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ca32e6ac460353813386cf1864695e4d9d04a6b4934bcfe2455e6e536733ceb\": not found" Aug 13 00:52:44.558698 kubelet[2596]: I0813 00:52:44.557844 2596 scope.go:117] "RemoveContainer" containerID="459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6" Aug 13 00:52:44.558978 containerd[1485]: time="2025-08-13T00:52:44.558918681Z" level=error msg="ContainerStatus for \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\": not found" Aug 13 00:52:44.559797 kubelet[2596]: E0813 00:52:44.559779 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\": not found" containerID="459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6" Aug 13 00:52:44.559868 kubelet[2596]: I0813 00:52:44.559851 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6"} err="failed to get container status \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"459c464e4a97d7599f95f7a05e7cbdab4e1a6daa3318b768c6b8eff1b3028bc6\": not found" Aug 13 00:52:44.559915 kubelet[2596]: I0813 00:52:44.559905 2596 scope.go:117] "RemoveContainer" containerID="cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6" Aug 13 00:52:44.564606 containerd[1485]: time="2025-08-13T00:52:44.564573749Z" level=info msg="RemoveContainer for \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\"" Aug 13 00:52:44.567414 containerd[1485]: time="2025-08-13T00:52:44.567389991Z" level=info msg="RemoveContainer for \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\" returns successfully" Aug 13 00:52:44.567579 kubelet[2596]: I0813 00:52:44.567560 2596 scope.go:117] "RemoveContainer" containerID="cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6" Aug 13 00:52:44.567960 containerd[1485]: time="2025-08-13T00:52:44.567908382Z" level=error msg="ContainerStatus for \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\": not found" Aug 13 00:52:44.568116 kubelet[2596]: E0813 00:52:44.568089 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\": not found" containerID="cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6" Aug 13 00:52:44.568201 kubelet[2596]: I0813 00:52:44.568183 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6"} err="failed to get container status \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdde5f52978612dffe3ffe1e07fb61865d356d997a8e4c964167f8fd44ed64a6\": not found" Aug 13 00:52:44.809862 kubelet[2596]: I0813 00:52:44.808796 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed7fd22-d5dc-4877-8b35-3a97e246932f" path="/var/lib/kubelet/pods/5ed7fd22-d5dc-4877-8b35-3a97e246932f/volumes" Aug 13 00:52:44.809862 kubelet[2596]: I0813 00:52:44.809582 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb77b33-8127-46b4-834c-dc204d34fbcd" path="/var/lib/kubelet/pods/deb77b33-8127-46b4-834c-dc204d34fbcd/volumes" Aug 13 00:52:44.990286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684-rootfs.mount: Deactivated successfully. Aug 13 00:52:44.990416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97-rootfs.mount: Deactivated successfully. 
Aug 13 00:52:44.990491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97-shm.mount: Deactivated successfully. Aug 13 00:52:44.990582 systemd[1]: var-lib-kubelet-pods-deb77b33\x2d8127\x2d46b4\x2d834c\x2ddc204d34fbcd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxg5zr.mount: Deactivated successfully. Aug 13 00:52:44.990706 systemd[1]: var-lib-kubelet-pods-5ed7fd22\x2dd5dc\x2d4877\x2d8b35\x2d3a97e246932f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqp5gb.mount: Deactivated successfully. Aug 13 00:52:44.990797 systemd[1]: var-lib-kubelet-pods-5ed7fd22\x2dd5dc\x2d4877\x2d8b35\x2d3a97e246932f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:52:44.990893 systemd[1]: var-lib-kubelet-pods-5ed7fd22\x2dd5dc\x2d4877\x2d8b35\x2d3a97e246932f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:52:45.981028 sshd[4661]: Connection closed by 139.178.89.65 port 55844 Aug 13 00:52:45.981862 sshd-session[4659]: pam_unix(sshd:session): session closed for user core Aug 13 00:52:45.989076 systemd[1]: sshd@53-172.234.29.142:22-139.178.89.65:55844.service: Deactivated successfully. Aug 13 00:52:45.992372 systemd[1]: session-54.scope: Deactivated successfully. Aug 13 00:52:45.994356 systemd-logind[1461]: Session 54 logged out. Waiting for processes to exit. Aug 13 00:52:45.995939 systemd-logind[1461]: Removed session 54. Aug 13 00:52:46.048970 systemd[1]: Started sshd@54-172.234.29.142:22-139.178.89.65:55848.service - OpenSSH per-connection server daemon (139.178.89.65:55848). 
Aug 13 00:52:46.391302 sshd[4821]: Accepted publickey for core from 139.178.89.65 port 55848 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:52:46.393045 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:46.397828 systemd-logind[1461]: New session 55 of user core.
Aug 13 00:52:46.401854 systemd[1]: Started session-55.scope - Session 55 of User core.
Aug 13 00:52:46.809805 kubelet[2596]: E0813 00:52:46.809759 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:47.140535 kubelet[2596]: I0813 00:52:47.140370 2596 memory_manager.go:355] "RemoveStaleState removing state" podUID="deb77b33-8127-46b4-834c-dc204d34fbcd" containerName="cilium-operator"
Aug 13 00:52:47.140535 kubelet[2596]: I0813 00:52:47.140402 2596 memory_manager.go:355] "RemoveStaleState removing state" podUID="5ed7fd22-d5dc-4877-8b35-3a97e246932f" containerName="cilium-agent"
Aug 13 00:52:47.150989 systemd[1]: Created slice kubepods-burstable-pod33f6f833_5413_4146_aa4c_c56f98b1be7e.slice - libcontainer container kubepods-burstable-pod33f6f833_5413_4146_aa4c_c56f98b1be7e.slice.
Aug 13 00:52:47.175404 sshd[4823]: Connection closed by 139.178.89.65 port 55848
Aug 13 00:52:47.176391 sshd-session[4821]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:47.181416 systemd-logind[1461]: Session 55 logged out. Waiting for processes to exit.
Aug 13 00:52:47.183230 systemd[1]: sshd@54-172.234.29.142:22-139.178.89.65:55848.service: Deactivated successfully.
Aug 13 00:52:47.185978 systemd[1]: session-55.scope: Deactivated successfully.
Aug 13 00:52:47.187354 systemd-logind[1461]: Removed session 55.
Aug 13 00:52:47.205584 kubelet[2596]: I0813 00:52:47.205401 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-etc-cni-netd\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205584 kubelet[2596]: I0813 00:52:47.205434 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-xtables-lock\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205584 kubelet[2596]: I0813 00:52:47.205456 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33f6f833-5413-4146-aa4c-c56f98b1be7e-clustermesh-secrets\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205584 kubelet[2596]: I0813 00:52:47.205472 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-hostproc\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205584 kubelet[2596]: I0813 00:52:47.205504 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-cni-path\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205584 kubelet[2596]: I0813 00:52:47.205525 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbr7\" (UniqueName: \"kubernetes.io/projected/33f6f833-5413-4146-aa4c-c56f98b1be7e-kube-api-access-8zbr7\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205796 kubelet[2596]: I0813 00:52:47.205557 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33f6f833-5413-4146-aa4c-c56f98b1be7e-hubble-tls\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205796 kubelet[2596]: I0813 00:52:47.205599 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-cilium-run\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205796 kubelet[2596]: I0813 00:52:47.205626 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-host-proc-sys-net\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205796 kubelet[2596]: I0813 00:52:47.205656 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-bpf-maps\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205796 kubelet[2596]: I0813 00:52:47.205685 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33f6f833-5413-4146-aa4c-c56f98b1be7e-cilium-config-path\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205796 kubelet[2596]: I0813 00:52:47.205702 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33f6f833-5413-4146-aa4c-c56f98b1be7e-cilium-ipsec-secrets\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205943 kubelet[2596]: I0813 00:52:47.205717 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-lib-modules\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205943 kubelet[2596]: I0813 00:52:47.205732 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-cilium-cgroup\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.205943 kubelet[2596]: I0813 00:52:47.205748 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33f6f833-5413-4146-aa4c-c56f98b1be7e-host-proc-sys-kernel\") pod \"cilium-4gt7j\" (UID: \"33f6f833-5413-4146-aa4c-c56f98b1be7e\") " pod="kube-system/cilium-4gt7j"
Aug 13 00:52:47.242872 systemd[1]: Started sshd@55-172.234.29.142:22-139.178.89.65:55864.service - OpenSSH per-connection server daemon (139.178.89.65:55864).
Aug 13 00:52:47.454784 kubelet[2596]: E0813 00:52:47.454421 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:47.455857 containerd[1485]: time="2025-08-13T00:52:47.455789972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gt7j,Uid:33f6f833-5413-4146-aa4c-c56f98b1be7e,Namespace:kube-system,Attempt:0,}"
Aug 13 00:52:47.478377 containerd[1485]: time="2025-08-13T00:52:47.478277341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:52:47.478377 containerd[1485]: time="2025-08-13T00:52:47.478338716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:52:47.478377 containerd[1485]: time="2025-08-13T00:52:47.478353547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:52:47.478526 containerd[1485]: time="2025-08-13T00:52:47.478430723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:52:47.501803 systemd[1]: Started cri-containerd-ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57.scope - libcontainer container ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57.
Aug 13 00:52:47.528786 containerd[1485]: time="2025-08-13T00:52:47.528742023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gt7j,Uid:33f6f833-5413-4146-aa4c-c56f98b1be7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\""
Aug 13 00:52:47.529539 kubelet[2596]: E0813 00:52:47.529479 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:47.533283 containerd[1485]: time="2025-08-13T00:52:47.532896913Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:52:47.544844 containerd[1485]: time="2025-08-13T00:52:47.544788868Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ac23abc046d81a0c9ec8c0e3fe8a5eb1935f3195d5b49b46e1752141b350f7b\""
Aug 13 00:52:47.545803 containerd[1485]: time="2025-08-13T00:52:47.545077040Z" level=info msg="StartContainer for \"0ac23abc046d81a0c9ec8c0e3fe8a5eb1935f3195d5b49b46e1752141b350f7b\""
Aug 13 00:52:47.560415 sshd[4835]: Accepted publickey for core from 139.178.89.65 port 55864 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:52:47.562518 sshd-session[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:47.569459 systemd-logind[1461]: New session 56 of user core.
Aug 13 00:52:47.574799 systemd[1]: Started session-56.scope - Session 56 of User core.
Aug 13 00:52:47.594815 systemd[1]: Started cri-containerd-0ac23abc046d81a0c9ec8c0e3fe8a5eb1935f3195d5b49b46e1752141b350f7b.scope - libcontainer container 0ac23abc046d81a0c9ec8c0e3fe8a5eb1935f3195d5b49b46e1752141b350f7b.
Aug 13 00:52:47.621838 containerd[1485]: time="2025-08-13T00:52:47.621801572Z" level=info msg="StartContainer for \"0ac23abc046d81a0c9ec8c0e3fe8a5eb1935f3195d5b49b46e1752141b350f7b\" returns successfully"
Aug 13 00:52:47.635550 systemd[1]: cri-containerd-0ac23abc046d81a0c9ec8c0e3fe8a5eb1935f3195d5b49b46e1752141b350f7b.scope: Deactivated successfully.
Aug 13 00:52:47.664336 containerd[1485]: time="2025-08-13T00:52:47.664177202Z" level=info msg="shim disconnected" id=0ac23abc046d81a0c9ec8c0e3fe8a5eb1935f3195d5b49b46e1752141b350f7b namespace=k8s.io
Aug 13 00:52:47.664336 containerd[1485]: time="2025-08-13T00:52:47.664227526Z" level=warning msg="cleaning up after shim disconnected" id=0ac23abc046d81a0c9ec8c0e3fe8a5eb1935f3195d5b49b46e1752141b350f7b namespace=k8s.io
Aug 13 00:52:47.664336 containerd[1485]: time="2025-08-13T00:52:47.664235567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:52:47.799104 sshd[4899]: Connection closed by 139.178.89.65 port 55864
Aug 13 00:52:47.800290 sshd-session[4835]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:47.803259 systemd[1]: sshd@55-172.234.29.142:22-139.178.89.65:55864.service: Deactivated successfully.
Aug 13 00:52:47.805605 systemd[1]: session-56.scope: Deactivated successfully.
Aug 13 00:52:47.807398 systemd-logind[1461]: Session 56 logged out. Waiting for processes to exit.
Aug 13 00:52:47.809072 systemd-logind[1461]: Removed session 56.
Aug 13 00:52:47.862863 systemd[1]: Started sshd@56-172.234.29.142:22-139.178.89.65:55880.service - OpenSSH per-connection server daemon (139.178.89.65:55880).
Aug 13 00:52:47.944184 kubelet[2596]: E0813 00:52:47.944140 2596 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:52:48.193507 sshd[4954]: Accepted publickey for core from 139.178.89.65 port 55880 ssh2: RSA SHA256:YmHZo80jc6RY6+AOWfNddm8jK265B8RD33F9h1SpE+s
Aug 13 00:52:48.197988 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:48.203202 systemd-logind[1461]: New session 57 of user core.
Aug 13 00:52:48.207799 systemd[1]: Started session-57.scope - Session 57 of User core.
Aug 13 00:52:48.529732 kubelet[2596]: E0813 00:52:48.528285 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:48.531748 containerd[1485]: time="2025-08-13T00:52:48.531364119Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:52:48.545896 containerd[1485]: time="2025-08-13T00:52:48.545861904Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f\""
Aug 13 00:52:48.546714 containerd[1485]: time="2025-08-13T00:52:48.546577209Z" level=info msg="StartContainer for \"2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f\""
Aug 13 00:52:48.579811 systemd[1]: Started cri-containerd-2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f.scope - libcontainer container 2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f.
Aug 13 00:52:48.608759 containerd[1485]: time="2025-08-13T00:52:48.608165223Z" level=info msg="StartContainer for \"2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f\" returns successfully"
Aug 13 00:52:48.615652 systemd[1]: cri-containerd-2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f.scope: Deactivated successfully.
Aug 13 00:52:48.639110 containerd[1485]: time="2025-08-13T00:52:48.639045627Z" level=info msg="shim disconnected" id=2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f namespace=k8s.io
Aug 13 00:52:48.639110 containerd[1485]: time="2025-08-13T00:52:48.639102141Z" level=warning msg="cleaning up after shim disconnected" id=2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f namespace=k8s.io
Aug 13 00:52:48.639110 containerd[1485]: time="2025-08-13T00:52:48.639111342Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:52:49.311486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2692b18d3e0d8a5e50996041413795a455a932659d91242dcfaf162448d6c74f-rootfs.mount: Deactivated successfully.
Aug 13 00:52:49.531565 kubelet[2596]: E0813 00:52:49.531535 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:49.533951 containerd[1485]: time="2025-08-13T00:52:49.533923198Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:52:49.555312 containerd[1485]: time="2025-08-13T00:52:49.555277200Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf\""
Aug 13 00:52:49.556778 containerd[1485]: time="2025-08-13T00:52:49.556696788Z" level=info msg="StartContainer for \"9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf\""
Aug 13 00:52:49.585951 systemd[1]: run-containerd-runc-k8s.io-9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf-runc.cOAbS1.mount: Deactivated successfully.
Aug 13 00:52:49.592825 systemd[1]: Started cri-containerd-9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf.scope - libcontainer container 9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf.
Aug 13 00:52:49.634125 containerd[1485]: time="2025-08-13T00:52:49.634030159Z" level=info msg="StartContainer for \"9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf\" returns successfully"
Aug 13 00:52:49.635396 systemd[1]: cri-containerd-9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf.scope: Deactivated successfully.
Aug 13 00:52:49.659244 containerd[1485]: time="2025-08-13T00:52:49.659191219Z" level=info msg="shim disconnected" id=9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf namespace=k8s.io
Aug 13 00:52:49.659244 containerd[1485]: time="2025-08-13T00:52:49.659242813Z" level=warning msg="cleaning up after shim disconnected" id=9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf namespace=k8s.io
Aug 13 00:52:49.659534 containerd[1485]: time="2025-08-13T00:52:49.659251604Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:52:49.807125 kubelet[2596]: E0813 00:52:49.807061 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:50.311545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9206335219815485b268147318e9389e3079d97d1204d6dcca40bef77d4acbcf-rootfs.mount: Deactivated successfully.
Aug 13 00:52:50.535351 kubelet[2596]: E0813 00:52:50.535294 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:50.542714 containerd[1485]: time="2025-08-13T00:52:50.542566030Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:52:50.558588 containerd[1485]: time="2025-08-13T00:52:50.558485542Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55\""
Aug 13 00:52:50.559142 containerd[1485]: time="2025-08-13T00:52:50.559086877Z" level=info msg="StartContainer for \"633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55\""
Aug 13 00:52:50.596818 systemd[1]: Started cri-containerd-633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55.scope - libcontainer container 633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55.
Aug 13 00:52:50.633980 systemd[1]: cri-containerd-633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55.scope: Deactivated successfully.
Aug 13 00:52:50.634535 containerd[1485]: time="2025-08-13T00:52:50.634492850Z" level=info msg="StartContainer for \"633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55\" returns successfully"
Aug 13 00:52:50.656208 containerd[1485]: time="2025-08-13T00:52:50.656141761Z" level=info msg="shim disconnected" id=633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55 namespace=k8s.io
Aug 13 00:52:50.656444 containerd[1485]: time="2025-08-13T00:52:50.656238878Z" level=warning msg="cleaning up after shim disconnected" id=633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55 namespace=k8s.io
Aug 13 00:52:50.656444 containerd[1485]: time="2025-08-13T00:52:50.656249089Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:52:50.953250 kubelet[2596]: I0813 00:52:50.953176 2596 setters.go:602] "Node became not ready" node="172-234-29-142" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:52:50Z","lastTransitionTime":"2025-08-13T00:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:52:51.311500 systemd[1]: run-containerd-runc-k8s.io-633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55-runc.BITNaY.mount: Deactivated successfully.
Aug 13 00:52:51.311647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-633c935b79258ad49b19fddca6e9684d86d440104f05835679b30c672366fc55-rootfs.mount: Deactivated successfully.
Aug 13 00:52:51.538877 kubelet[2596]: E0813 00:52:51.538360 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:51.540438 containerd[1485]: time="2025-08-13T00:52:51.540408084Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:52:51.556135 containerd[1485]: time="2025-08-13T00:52:51.555855219Z" level=info msg="CreateContainer within sandbox \"ecbbe057e9d426fc19cbfbdc3c444a9d6a5d447328cb4c44fdad17b5b659ac57\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"66fa7f17e542604af790fb26536561d9e3cf630756310fa16712ab666fd8673d\""
Aug 13 00:52:51.558492 containerd[1485]: time="2025-08-13T00:52:51.558026910Z" level=info msg="StartContainer for \"66fa7f17e542604af790fb26536561d9e3cf630756310fa16712ab666fd8673d\""
Aug 13 00:52:51.587796 systemd[1]: Started cri-containerd-66fa7f17e542604af790fb26536561d9e3cf630756310fa16712ab666fd8673d.scope - libcontainer container 66fa7f17e542604af790fb26536561d9e3cf630756310fa16712ab666fd8673d.
Aug 13 00:52:51.619229 containerd[1485]: time="2025-08-13T00:52:51.619186067Z" level=info msg="StartContainer for \"66fa7f17e542604af790fb26536561d9e3cf630756310fa16712ab666fd8673d\" returns successfully"
Aug 13 00:52:52.053719 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 00:52:52.541795 kubelet[2596]: E0813 00:52:52.541768 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:53.543394 kubelet[2596]: E0813 00:52:53.543349 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:54.160117 kubelet[2596]: I0813 00:52:54.160076 2596 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:52:54.160117 kubelet[2596]: I0813 00:52:54.160117 2596 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 00:52:54.161608 containerd[1485]: time="2025-08-13T00:52:54.161566502Z" level=info msg="StopPodSandbox for \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\""
Aug 13 00:52:54.161938 containerd[1485]: time="2025-08-13T00:52:54.161778018Z" level=info msg="TearDown network for sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" successfully"
Aug 13 00:52:54.161938 containerd[1485]: time="2025-08-13T00:52:54.161820951Z" level=info msg="StopPodSandbox for \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" returns successfully"
Aug 13 00:52:54.162368 containerd[1485]: time="2025-08-13T00:52:54.162327337Z" level=info msg="RemovePodSandbox for \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\""
Aug 13 00:52:54.162476 containerd[1485]: time="2025-08-13T00:52:54.162451306Z" level=info msg="Forcibly stopping sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\""
Aug 13 00:52:54.162581 containerd[1485]: time="2025-08-13T00:52:54.162533912Z" level=info msg="TearDown network for sandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" successfully"
Aug 13 00:52:54.165714 containerd[1485]: time="2025-08-13T00:52:54.165648107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:52:54.165714 containerd[1485]: time="2025-08-13T00:52:54.165696561Z" level=info msg="RemovePodSandbox \"c7b41c268e59ce332e5b5063e6358fa6128d9ddc62efd0ddb2e8540a54575d97\" returns successfully"
Aug 13 00:52:54.166076 containerd[1485]: time="2025-08-13T00:52:54.166042195Z" level=info msg="StopPodSandbox for \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\""
Aug 13 00:52:54.166245 containerd[1485]: time="2025-08-13T00:52:54.166183086Z" level=info msg="TearDown network for sandbox \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\" successfully"
Aug 13 00:52:54.166245 containerd[1485]: time="2025-08-13T00:52:54.166235689Z" level=info msg="StopPodSandbox for \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\" returns successfully"
Aug 13 00:52:54.166589 containerd[1485]: time="2025-08-13T00:52:54.166559553Z" level=info msg="RemovePodSandbox for \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\""
Aug 13 00:52:54.166635 containerd[1485]: time="2025-08-13T00:52:54.166599566Z" level=info msg="Forcibly stopping sandbox \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\""
Aug 13 00:52:54.166699 containerd[1485]: time="2025-08-13T00:52:54.166640519Z" level=info msg="TearDown network for sandbox \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\" successfully"
Aug 13 00:52:54.169356 containerd[1485]: time="2025-08-13T00:52:54.169311282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:52:54.169356 containerd[1485]: time="2025-08-13T00:52:54.169355195Z" level=info msg="RemovePodSandbox \"9f29907847049d37f0ecf87b8cf02ffe4497525b53a378fc53d019dc93cc9684\" returns successfully"
Aug 13 00:52:54.171554 kubelet[2596]: I0813 00:52:54.171490 2596 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:52:54.173694 kubelet[2596]: I0813 00:52:54.173426 2596 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c" size=18897442 runtimeHandler=""
Aug 13 00:52:54.173748 containerd[1485]: time="2025-08-13T00:52:54.173630694Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:52:54.174569 containerd[1485]: time="2025-08-13T00:52:54.174523638Z" level=info msg="ImageDelete event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:52:54.175799 containerd[1485]: time="2025-08-13T00:52:54.175759177Z" level=info msg="ImageDelete event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:52:54.195694 containerd[1485]: time="2025-08-13T00:52:54.195387945Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" returns successfully"
Aug 13 00:52:54.209054 kubelet[2596]: I0813 00:52:54.209021 2596 eviction_manager.go:383] "Eviction manager: able to reduce resource pressure without evicting pods." resourceName="ephemeral-storage"
Aug 13 00:52:54.624950 systemd[1]: run-containerd-runc-k8s.io-66fa7f17e542604af790fb26536561d9e3cf630756310fa16712ab666fd8673d-runc.aHmNik.mount: Deactivated successfully.
Aug 13 00:52:54.941776 systemd-networkd[1389]: lxc_health: Link UP
Aug 13 00:52:54.961104 systemd-networkd[1389]: lxc_health: Gained carrier
Aug 13 00:52:55.456992 kubelet[2596]: E0813 00:52:55.456203 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:55.471788 kubelet[2596]: I0813 00:52:55.471746 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4gt7j" podStartSLOduration=8.471734369 podStartE2EDuration="8.471734369s" podCreationTimestamp="2025-08-13 00:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:52:52.552824525 +0000 UTC m=+339.855409602" watchObservedRunningTime="2025-08-13 00:52:55.471734369 +0000 UTC m=+342.774319446"
Aug 13 00:52:55.548193 kubelet[2596]: E0813 00:52:55.548145 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:56.549550 kubelet[2596]: E0813 00:52:56.549434 2596 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Aug 13 00:52:56.652789 systemd-networkd[1389]: lxc_health: Gained IPv6LL
Aug 13 00:52:58.906044 systemd[1]: run-containerd-runc-k8s.io-66fa7f17e542604af790fb26536561d9e3cf630756310fa16712ab666fd8673d-runc.vAuYTc.mount: Deactivated successfully.
Aug 13 00:53:01.029082 systemd[1]: run-containerd-runc-k8s.io-66fa7f17e542604af790fb26536561d9e3cf630756310fa16712ab666fd8673d-runc.8tB7tr.mount: Deactivated successfully.
Aug 13 00:53:01.148425 sshd[4956]: Connection closed by 139.178.89.65 port 55880
Aug 13 00:53:01.149150 sshd-session[4954]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:01.154908 systemd-logind[1461]: Session 57 logged out. Waiting for processes to exit.
Aug 13 00:53:01.155440 systemd[1]: sshd@56-172.234.29.142:22-139.178.89.65:55880.service: Deactivated successfully.
Aug 13 00:53:01.158408 systemd[1]: session-57.scope: Deactivated successfully.
Aug 13 00:53:01.159979 systemd-logind[1461]: Removed session 57.
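The entries above all follow the same journal shape: a wall-clock timestamp, an emitting unit with an optional PID in brackets, and a free-form message. A small parsing sketch (not part of the log; the regex, function names, and field choices are illustrative assumptions) shows how repeated events, such as the recurring kubelet "Nameserver limits exceeded" warnings, could be tallied per unit:

```python
import re
from collections import Counter

# Hypothetical pattern for lines like
# "Aug 13 00:52:46.809805 kubelet[2596]: E0813 ... dns.go:153] ..."
# The PID in brackets is optional (bare "kernel:" lines omit it).
LINE_RE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<unit>[\w.-]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)$"
)

def parse_line(line):
    """Return (timestamp, unit, pid, message), or None if the line doesn't match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return m.group("ts"), m.group("unit"), m.group("pid"), m.group("msg")

def count_by_unit(lines):
    """Count parseable entries per emitting unit (kubelet, systemd, containerd, ...)."""
    counts = Counter()
    for line in lines:
        parsed = parse_line(line)
        if parsed:
            counts[parsed[1]] += 1
    return counts
```

Feeding the journal through `count_by_unit` makes the dominant noise sources (here, the dns.go:153 warnings) easy to spot before digging into individual entries.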